Not everything is as fast as planned


Just curious: If the Pyra does go out with these sync issues, could they be fixed with a software/firmware update? I read through the thread but I'm still not entirely sure...
 
Yes, nothing in the hardware needs to change.
 
If I want a device with *perfect* display quality on the go, I'll reach for my smartphone. If I want a device with (presumably) excellent physical controls, desktop-class software, and plenty of USB and SD card space on the go, I'll reach for my Pyra every time.


It doesn't matter to me if the graphics are only "good enough" or if I need to use a workaround for a little while. It'll be fun to see the Pyra grow and evolve as it gets into more people's hands and as the big developers tweak the drivers.
 
Don't forget that the screen is also an easily replaceable part thanks to the display PCB. Upgrades there wouldn't be impossible later on.

 


Regarding bypassing the whole rotation deal in OpenGL applications, how would one do that in the most transparent way?
One could rotate the matrix, but my code uses glLoadIdentity() quite a bit. Render-to-texture would also work, but it's slow (I have no FBOs on GLES 1.1, unless I'm missing something here).

It's certainly possible and actually no biggie for my own code. But it would be cool if the Pyra wouldn't need modified sources for almost every app like we did on the Pandora. So yeah, working tear-free rotation would be really neat. Tearing in games is a bit meh.
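
For illustration, here's roughly what the no-FBO fallback mentioned above would look like on GLES 1.1. This is only a sketch: the texture is assumed to be pre-allocated at a power-of-two size, and the function name is made up. The per-frame glCopyTexSubImage2D readback is exactly what makes it slow:

```c
#include <GLES/gl.h>

/* Render the frame as usual, then call this to copy it into a texture
 * and redraw it as one quad rotated 90 degrees. */
void present_rotated(GLuint tex, GLint w, GLint h)
{
    static const GLfloat quad[] = { -1,-1,  1,-1,  -1,1,  1,1 };
    static const GLfloat uv[]   = {  0,0,   1,0,   0,1,   1,1 };

    /* Expensive: read the just-rendered frame back into the texture. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);

    /* Rotate via the projection matrix and draw a fullscreen quad. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glRotatef(90.0f, 0.0f, 0.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, quad);
    glTexCoordPointer(2, GL_FLOAT, 0, uv);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```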

At least an automatic rotate mode could probably be coded inside glshim. It would then handle the display (automatically turning the rotator off at load and back on at exit, controlled by an environment variable). The mouse would still need to be handled by the game (inverting X/Y), but that's a start.
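
Something like this, conceptually (not actual glshim code; the environment variable name and the way the real symbol is resolved are just placeholders):

```c
#include <GLES/gl.h>
#include <stdlib.h>

/* The real glLoadIdentity(), resolved e.g. via dlsym() when the shim loads. */
extern void (*real_glLoadIdentity)(void);

/* The shim exports its own glLoadIdentity() so unmodified apps pick it
 * up; whenever the projection matrix is reset, the rotation is baked
 * back in, so every frame comes out pre-rotated. */
void glLoadIdentity(void)
{
    GLint mode;

    real_glLoadIdentity();

    if (getenv("LIBGL_ROTATE90") == NULL)   /* placeholder variable name */
        return;

    glGetIntegerv(GL_MATRIX_MODE, &mode);
    if (mode == GL_PROJECTION)
        glRotatef(90.0f, 0.0f, 0.0f, 1.0f);
}
```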
 
EvilDragon said:


Well, I know there's a driver for the OMAP2+ DSS that supports command mode:


http://lxr.free-electrons.com/source/Documentation/arm/OMAP/DSS
That's the one where each program needs to send a command to update the screen.


Well, yes, of course. Every command mode driver behaves like that.
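
To illustrate what that means in practice with omapfb (a minimal userspace sketch, error handling omitted): after drawing into the mmap'ed framebuffer, nothing reaches the panel until the app pushes the dirty region out itself.

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/omapfb.h>

/* Kick a manual-update display: nothing reaches the panel until the
 * app issues this ioctl for the region it has changed. */
static int kick_update(int fb_fd, unsigned w, unsigned h)
{
    struct omapfb_update_window uw = {
        .x = 0, .y = 0,
        .width = w, .height = h,
        .out_x = 0, .out_y = 0,
        .out_width = w, .out_height = h,
    };
    return ioctl(fb_fd, OMAPFB_UPDATE_WINDOW, &uw);
}
```

Ordinary apps that just write to /dev/fb0 never do this, which is the core of the problem.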


There surely doesn't exist a command mode driver that is already hacked the way we need it to be.


But that driver could be modified so that it constantly pushes out frames: programs could then use it like the video-stream driver (no modification needed), while towards the display it would still be a command mode driver that can use the anti-tearing function.
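
A rough sketch of what that hack could look like inside the driver (illustrative names only, not the real omapdss API): a self-re-arming work item that pushes a frame roughly every 16 ms, so userspace sees a normal streaming framebuffer while the panel link stays in command mode with its anti-tearing handshake intact.

```c
#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct my_panel { int width, height; /* ...plus bus handles... */ };

void my_panel_wait_te(struct my_panel *p);                       /* hypothetical */
void my_panel_update(struct my_panel *p, int x, int y, int w, int h); /* hypothetical */

static struct delayed_work auto_update_work;
static struct my_panel *panel;   /* hypothetical driver state */

static void auto_update_fn(struct work_struct *work)
{
    /* Wait for the TE ("send the frame now") pulse from the bridge,
     * then push the full framebuffer, just like a userspace update
     * request would. */
    my_panel_wait_te(panel);
    my_panel_update(panel, 0, 0, panel->width, panel->height);

    /* Re-arm ourselves: ~60 updates per second. */
    schedule_delayed_work(&auto_update_work, msecs_to_jiffies(16));
}

/* In probe: INIT_DELAYED_WORK(&auto_update_work, auto_update_fn);
 *           schedule_delayed_work(&auto_update_work, 0); */
```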
 
Yes, it's annoying that it can't simply use the video input timings, but you could also say it's annoying that all SoCs ignore the anti-tearing signal when run in video mode...



I don't want to belabor this point too much, but most displays synchronize on their inputs, not the other way around. And there are actually some good reasons for this. If the same clock domain controls the display controller, processor, audio, etc., it's a lot easier to keep these things synchronized without worrying about time-shifting anything or checking synchronization. So, for example, you don't have to dynamically measure audio vs. video frequency and resample, or suffer missed/dropped frames, and you don't have to continually wait for vsync to actually hit vsync reliably. This is a nice edge for emulators.
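
(To make the emulator point concrete: on Linux fbdev, the per-frame wait looks like the sketch below. With a display that locks to its input, hitting this once per emulated frame is all the timing logic you need; otherwise you also have to measure drift and resample audio.)

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/fb.h>

#ifndef FBIO_WAITFORVSYNC
#define FBIO_WAITFORVSYNC _IOW('F', 0x20, __u32)
#endif

/* Block until the next vertical blank, then present the frame. */
static void present_frame(int fb_fd)
{
    __u32 screen = 0;                        /* display index */
    ioctl(fb_fd, FBIO_WAITFORVSYNC, &screen);
    /* ...now flip or copy the finished frame into the framebuffer. */
}
```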


So I'm going to say that even if you get the rotation chip working this way somehow it'll be good for people to bypass it anyway in some fullscreen apps. I just hope that support for the 2D accelerator materializes or the overhead of uploading textures to the GPU proves to be low enough.
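
(If anyone wants to check that upload overhead once hardware is available, a quick-and-dirty timing like this sketch would do; the 565 pixel format is just an example:)

```c
#include <GLES/gl.h>
#include <time.h>

/* Returns milliseconds spent uploading one full frame to the GPU. */
double time_upload(GLuint tex, const void *pixels, GLint w, GLint h)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pixels);
    glFinish();                       /* force the upload to finish */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    return (t1.tv_sec - t0.tv_sec) * 1e3 +
           (t1.tv_nsec - t0.tv_nsec) / 1e6;
}
```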
 
At least an automatic rotate mode could probably be coded inside glshim. It would then handle the display (automatically turning the rotator off at load and back on at exit, controlled by an environment variable). The mouse would still need to be handled by the game (inverting X/Y), but that's a start.

Unless...
 
This is a molehill being turned into a mountain, really ludicrous!


*If*, and it's a big if, you assume the rotator chip is "useless", the GPU can rotate with negligible overhead in terms of both battery and CPU/GPU power. Heck, transformations like this are basically exactly what a GPU is designed to do...


You could even go for something brainless like a daemon waiting for GL sync and kicking the rotator each frame... (and if that came too late, you could use it as the trigger for the kick the rotator needs next frame).
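
In pseudo-daemon form (everything here is hypothetical: the sysfs node is made up, and how the rotator actually gets "kicked" depends on the final driver):

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fb.h>

#ifndef FBIO_WAITFORVSYNC
#define FBIO_WAITFORVSYNC _IOW('F', 0x20, __u32)
#endif

int main(void)
{
    int fb   = open("/dev/fb0", O_RDWR);
    /* Hypothetical sysfs attribute exposed by the rotator driver. */
    int kick = open("/sys/class/graphics/fb0/rotator_kick", O_WRONLY);
    __u32 screen = 0;

    for (;;) {
        ioctl(fb, FBIO_WAITFORVSYNC, &screen); /* wait for display sync */
        write(kick, "1", 1);                   /* one kick per frame */
    }
}
```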


There are MANY ways to "skin a cat" (an English idiom: many ways to do the same thing). Making a big drama out of it just looks like poor form...
 
EvilDragon said: Yes, it's annoying that it can't simply use the video input timings, but you could also say it's annoying that all SoCs ignore the anti-tearing signal when run in video mode...
 
I don't want to belabor this point too much, but most displays synchronize on their inputs, not the other way around. And there are actually some good reasons for this. If the same clock domain controls the display controller, processor, audio, etc., it's a lot easier to keep these things synchronized without worrying about time-shifting anything or checking synchronization.
Yes, when the display is directly connected to the SoC. This also works without any issues.


However, whenever there's a MIPI bridge involved (like the rotator chip, or any MIPI-to-HDMI or whatever bridge), it's different.


There's the anti-tearing signal, and that one ALWAYS is an output from the bridge and an input on the SoC.


Don't ask me why - but that's what the MIPI committee came up with.


We actually fell into that trap as well, as we thought this was a sync signal from the OMAP to the bridge (which would make a lot more sense, in our opinion).


I can imagine that the reason they do it like this is the way command mode works.


As mentioned, in command mode, the SoC only sends a frame whenever the content changes. So if a smartphone shows the dashboard, which doesn't change for a minute or so, it doesn't send a stream, which reduces power usage.


So there's no sync coming from the SoC to the display or bridge during that time.


So the display controller (they usually have a framebuffer) has to keep up the sync itself.


When the SoC wants to send one or more frames because the content has been updated, it lets the display/bridge know, and the display/bridge then sends the anti-tearing signal (which is basically a 'send the frame now' signal).


With that in mind, it makes sense that the sync comes from the display controller/bridge to the SoC and not the other way round.


The rotator chip we're using is used in millions of Chinese Android smartphones, all of them using command mode with the anti-tearing signal coming from the chip to the SoC.
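
In pseudo-driver form, the handshake described above looks roughly like this (the DCS opcode is standard MIPI DCS; the helper functions are illustrative, not a real kernel API):

```c
#define MIPI_DCS_SET_TEAR_ON  0x35   /* standard DCS: enable TE output */

struct panel;   /* opaque driver state, hypothetical */

/* Hypothetical helpers: */
void dcs_write(struct panel *p, unsigned char cmd, unsigned char param);
void wait_for_te_irq(struct panel *p);
void dsi_send_frame(struct panel *p, const void *frame);

static void send_frame(struct panel *p, const void *frame)
{
    /* 1. Ask the bridge/panel to emit the TE pulse (0x00 = vblank only). */
    dcs_write(p, MIPI_DCS_SET_TEAR_ON, 0x00);

    /* 2. The TE line is an OUTPUT of the bridge and an INPUT on the SoC:
     *    block until the bridge says "send the frame now". */
    wait_for_te_irq(p);

    /* 3. Push the frame inside the safe window, so the bridge's own
     *    framebuffer is never scanned out while being overwritten. */
    dsi_send_frame(p, frame);
}
```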
 
What I'm asking myself though is... there seem to be others who have used command mode panels with Linux.


Here is an example panel driver (command mode) in use with the OMAP:


git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/video/omap2/displays/panel-taal.c?id=c16fa4f2ad19908a47c63d8fa436a1178438c7e7


How did they solve the issue of the screen not updating? I can't imagine they recompiled all of the Linux apps to work with command mode.


What's the trick here?
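
My only guess would be fbdev's deferred I/O mechanism (whether the Taal setup actually used it, I don't know): the driver watches page faults on the mmap'ed framebuffer, collects dirty pages, and flushes them from a timer, so unmodified apps never have to send an update command themselves. Roughly:

```c
#include <linux/fb.h>

void my_panel_update(void *par, int x, int y, int w, int h); /* hypothetical */

/* Flush callback: translate dirty pages into an update rectangle and
 * push it to the command-mode panel. */
static void my_defio_flush(struct fb_info *info, struct list_head *pagelist)
{
    my_panel_update(info->par, 0, 0, info->var.xres, info->var.yres);
}

static struct fb_deferred_io my_defio = {
    .delay       = HZ / 30,          /* flush at most ~30 times/sec */
    .deferred_io = my_defio_flush,
};

/* In the panel driver's probe:
 *     info->fbdefio = &my_defio;
 *     fb_deferred_io_init(info);
 */
```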
 
@EvilDragon I don't know if this is related: https://www.mail-archive.com/linux-omap@vger.kernel.org/msg86047.html but it says: "Taal panel driver has support to set rotation and mirroring. However, these features cannot be used without causing tearing, and are never used. The code is just extra bloat, so let's remove it". It is something I found whilst trying to understand how the code you linked to works (looking at the code, I couldn't understand how it would avoid tearing).
 
The only reason any of us know about prototype- or beta-level issues at all is because of the openness of this project. Making buying-decision ultimatums based on pre-prototype software status is a lot like loudly threatening to get off the bus at the next stop.


Ed, ignore the few fools trolling for your response.  Get back to work on those prototypes. :)

With all respect, you're confusing ultimatums with options. Respect for others' opinions and options should remain; let's not slide into cheap insults here.
 
Whilst I respect and admire notaz's expertise in programming, I am sure that the only important fact right now was expressed by ED:


If the Pyra isn't going to market now, it never will! So we can all discuss whether or not some tearing is important for one's own use case, but if there is no actual device, the discussion is futile.


And what did we expect at this point in time? A perfect Pyra? In my opinion, there are two things we should have learned in all the Pandora years:


1. There will be obstacles all around, and we will encounter most of them. Sometimes it will be hard to get over them.


2. But if there is a community that is able to solve these problems, it is this one. And if there is a person I trust to address these problems and help solve them, if not solve them himself, it is ED.


When the Pre-Pre-Order was announced, I strongly fought with myself over whether I should order one. It's a fair amount of money for a Pyra which I definitely knew would not be a polished product. So what? It's the only electronic device I can think of that will still be serviced perfectly even years from now. If there will (and only if!) be a need for a new screen, there will also be an offer for replacing the screen. If I have to update the OS/firmware/driver... I just will do so. And if there is nobody at the moment who can fix the issue, I will patiently wait for someone. It's as simple as that. We all know the past issues with our Pandoras, don't we?


If I want a perfectly working device, I can't even buy one from any major company, and I don't trust them to repair that device even if a flaw is obvious.
 
 
To me it is similar to the GPU (and the other more obscure features of the SoC). There is a chance that some hardware features will never have software support at all, or will have support that won't be upstreamed. None of this makes the Pyra totally useless, but IMO the costs for paid kernel development and upstreaming should still be factored into the unit price, besides looking for hardware solutions (e.g. a landscape display).
 
Well, once the Pyra sells, there is money coming in which can be used to pay some fulltime development. That's what I plan. The better it sells, the more money is available for that. :)
 
Well, once the Pyra sells, there is money coming in which can be used to pay some fulltime development. That's what I plan. The better it sells, the more money is available for that. :)

Finding good developers is the challenge then; weren't there some bad experiences during the Pandora development?
 
The device is already fully usable, even if you have tearing until the software solution is found.

And that's what really matters.

And if such a display EVER appears, the old units can be upgraded - we're modular. All you'd need to do is replace the display PCB and the LCD, and that's it.

Exactly, just like with the CPU/SoC board, good modular designs open up a lot of possibilities, which is why I will be more than comfortable to go for a first run Pyra. I wish to support this project any way that I can.

What would you all prefer?


Should I now stop everything and probably file for bankruptcy in a while (unless the magical hardware fix that needs no software work suddenly appears), or continue finishing the unit?

I wholeheartedly believe that this is a project that is well worth being behind. Obviously, with the world as it is, it's hard for things to always go perfectly, and it's absolutely amazing just how well things have been going thus far with this project. There are a lot of people who just need to step back and take a good deep breath.
 
What I'm asking myself though is... there seem to be others who have used command mode panels with Linux.


Here is an example panel driver (command mode) in use with the OMAP:


git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/video/omap2/displays/panel-taal.c?id=c16fa4f2ad19908a47c63d8fa436a1178438c7e7


How did they solve the issue of the screen not updating? I can't imagine they recompiled all of the Linux apps to work with command mode.


What's the trick here?

Can you ask them? As long as your project is not directly competing with theirs, they should be able to help.
 