Should a hardware display layer be reserved for the OS?


codifies
As I understand the display hardware, there are a small number of independent display layers.


The idea I've had is to write a kernel driver for the keyboard that would make use of a dedicated layer, with the ability to consume touch events when needed.


This would mean that even at the lowest level, without X running, you could hold down a special key and a semi-transparent grid of symbols would pop up. While the special key was held, you could page through to other symbol pages and select multiple symbols (keycodes), which would be output.
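To make that a bit more concrete, here's a rough, untested sketch of the keyboard side as a kernel input-handler filter. The OVERLAY_KEY choice, the overlay_active flag, and the point where the reserved layer would be shown or hidden are all placeholders, not existing APIs, and the sketch just attaches to every input device to keep it short:

[code]
/* Sketch only: a kernel input handler that watches for a dedicated
 * "overlay" key and consumes it. OVERLAY_KEY and overlay_active are
 * made-up names; showing/hiding the reserved layer is left as a comment. */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/input.h>
#include <linux/slab.h>

#define OVERLAY_KEY KEY_RIGHTMETA   /* assumption: whichever key ends up dedicated */

static bool overlay_active;         /* true while the special key is held */

/* Runs before events reach evdev; returning true consumes the event. */
static bool overlay_kbd_filter(struct input_handle *handle,
                               unsigned int type, unsigned int code, int value)
{
    if (type == EV_KEY && code == OVERLAY_KEY) {
        overlay_active = (value != 0);
        /* a real driver would show/hide the reserved display layer here */
        return true;                /* the special key never reaches apps */
    }
    return false;                   /* everything else passes through */
}

static int overlay_kbd_connect(struct input_handler *handler,
                               struct input_dev *dev,
                               const struct input_device_id *id)
{
    struct input_handle *handle;
    int err;

    handle = kzalloc(sizeof(*handle), GFP_KERNEL);
    if (!handle)
        return -ENOMEM;

    handle->dev = dev;
    handle->handler = handler;
    handle->name = "overlay-kbd";

    err = input_register_handle(handle);
    if (err) {
        kfree(handle);
        return err;
    }
    err = input_open_device(handle);
    if (err) {
        input_unregister_handle(handle);
        kfree(handle);
        return err;
    }
    return 0;
}

static void overlay_kbd_disconnect(struct input_handle *handle)
{
    input_close_device(handle);
    input_unregister_handle(handle);
    kfree(handle);
}

/* For the sketch, match every device; a real driver would match only
 * the built-in keyboard. */
static const struct input_device_id overlay_kbd_ids[] = {
    { .driver_info = 1 },
    { },
};
MODULE_DEVICE_TABLE(input, overlay_kbd_ids);

static struct input_handler overlay_kbd_handler = {
    .filter     = overlay_kbd_filter,
    .connect    = overlay_kbd_connect,
    .disconnect = overlay_kbd_disconnect,
    .name       = "overlay_kbd",
    .id_table   = overlay_kbd_ids,
};

static int __init overlay_kbd_init(void)
{
    return input_register_handler(&overlay_kbd_handler);
}

static void __exit overlay_kbd_exit(void)
{
    input_unregister_handler(&overlay_kbd_handler);
}

module_init(overlay_kbd_init);
module_exit(overlay_kbd_exit);
MODULE_LICENSE("GPL");
[/code]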


If each grid item were easily definable so it could issue a character or even run a script, a user could have different configurations for writing Java, Python or even German!


If the X display driver left this layer alone (including GLES) we'd have a great input method. Sure, it would take a little time to implement and might have to go through a few iterations of user howling, but I think it would be nothing short of amazeballs...!
 
It would be nice to use such a thing for the likes of a Wii/Xbox overlay menu too. Kind of like what WizardStan made for the Pandy.

You could use it to quickly toggle wifi/bt/3g, terminate unresponsive programs, etc.

... and, going back to what I thought about a looong time ago as a part of the Tournament Hub - some C4A information, with maybe a minimal chat system or whatnot (think of the Steam overlay).
 
If each grid item were easily definable so it could issue a character or even run a script,
That was exactly my idea: assuming there was no wifi on/off "grid cell" already defined, the user could just make their own graphic and script and add it to their keyboard config!
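Something like this is what I have in mind for the config side. The struct, the action types, and the file paths are all made up for illustration; nothing here exists yet:

[code]
/* Hypothetical grid-cell description for the overlay keyboard.
 * Each cell either emits a keycode, emits a UTF-8 string, or runs
 * a user-supplied script; none of this is an existing API. */
#include <linux/input-event-codes.h>   /* for KEY_* constants */

enum cell_action {
    CELL_EMIT_KEYCODE,   /* inject a single keycode */
    CELL_EMIT_TEXT,      /* inject a character/string, e.g. "ß" */
    CELL_RUN_SCRIPT,     /* hand a script path to a userspace helper */
};

struct grid_cell {
    const char      *icon;     /* path to a (line-drawn) icon for the layer */
    enum cell_action action;
    int              keycode;  /* used when action == CELL_EMIT_KEYCODE */
    const char      *payload;  /* text or script path for the other actions */
};

/* Example page for writing German, plus a user-added wifi toggle: */
static const struct grid_cell german_page[] = {
    { "icons/sz.png",    CELL_EMIT_TEXT,    0,         "ß" },
    { "icons/ue.png",    CELL_EMIT_TEXT,    0,         "ü" },
    { "icons/wifi.png",  CELL_RUN_SCRIPT,   0,         "~/.overlay/wifi-toggle.sh" },
    { "icons/enter.png", CELL_EMIT_KEYCODE, KEY_ENTER, NULL },
};
[/code]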
 
For this to work, all input that could be trapped by the grid needs to be proxied by an input daemon of sorts. Also, I don't think the layers on the OMAP3 support opacity; the pixel is either visible or not, so I'm not too optimistic about the OMAP5 having opacity capabilities for its layers. I can't connect to Trashy's devboard atm, so I can't check if the driver even presents the other layers :p
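When I can get at it, something along these lines should show what the driver exposes and whether any layer reports alpha bits. This only uses the standard fbdev ioctls and just probes /dev/fb0..fb3, on the assumption that the OMAP DSS driver exposes the extra layers as separate framebuffer devices:

[code]
/* Sketch: enumerate framebuffer devices and print their resolution and
 * whether the pixel format reports any alpha bits. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fb.h>

int main(void)
{
    char path[32];
    for (int i = 0; i < 4; i++) {
        snprintf(path, sizeof(path), "/dev/fb%d", i);
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            continue;   /* layer not present (or not exposed by the driver) */

        struct fb_var_screeninfo var;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &var) == 0)
            printf("%s: %ux%u, %u bpp, %u alpha bits\n",
                   path, var.xres, var.yres, var.bits_per_pixel,
                   var.transp.length);
        close(fd);
    }
    return 0;
}
[/code]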
 
Well, line-drawn icons would be see-through enough!

The touch driver would need to send raw events either directly to the grid or down its normal evdev path, depending on state.

This implies tight integration between the keyboard and touchscreen drivers, but as this is a fixed hardware platform that's not an issue...

Just to be clear, I'm talking about doing this at the lowest possible level, as if it's baked into the hardware, so it works universally, just as you'd expect a normal keyboard to...
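The touch side of that routing would then just be a second filter, registered with input_register_handler() exactly like the keyboard handler I sketched above; overlay_active and grid_handle_touch() are the same made-up names, not real APIs:

[code]
/* Sketch: touch-side counterpart to the keyboard filter sketched earlier
 * (same connect/disconnect/registration boilerplate, omitted here). */
#include <linux/input.h>

extern bool overlay_active;                      /* set by the keyboard filter */
void grid_handle_touch(unsigned int type,        /* hypothetical: map a touch */
                       unsigned int code, int value);  /* event to a grid cell */

static bool overlay_touch_filter(struct input_handle *handle,
                                 unsigned int type, unsigned int code, int value)
{
    if (!overlay_active)
        return false;                 /* normal evdev path, nothing changes */

    /* While the special key is held, touches feed the grid and never
     * reach X or any other evdev client. */
    grid_handle_touch(type, code, value);
    return true;
}
[/code]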
 
Also, I don't think the layers on the OMAP3 support opacity; the pixel is either visible or not
IMO that wouldn't really be a problem
Alternatively, when activating, could you not scrape the screen into an image which you could paint as your background on the reserved layer?

This means you could then apply a greyscale/bloom/darkening effect to the screen in your "screen-shot" before drawing.
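Roughly something like this, assuming both the main UI and the reserved layer are plain 32-bit XRGB framebuffers of the same size; /dev/fb0 and /dev/fb1 are just guesses here, and real code would query the pixel formats and line length first:

[code]
/* Sketch: copy the main framebuffer onto the overlay layer, darkened,
 * so the grid can be drawn over a dimmed "screenshot" of the UI.
 * Assumes both layers are packed 32-bit XRGB of the same size. */
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <linux/fb.h>

int main(void)
{
    int src = open("/dev/fb0", O_RDONLY);   /* main UI layer (assumed) */
    int dst = open("/dev/fb1", O_RDWR);     /* reserved overlay layer (assumed) */
    if (src < 0 || dst < 0)
        return 1;

    struct fb_var_screeninfo var;
    if (ioctl(src, FBIOGET_VSCREENINFO, &var) < 0)
        return 1;
    size_t len = (size_t)var.xres * var.yres * 4;

    uint32_t *in  = mmap(NULL, len, PROT_READ,  MAP_SHARED, src, 0);
    uint32_t *out = mmap(NULL, len, PROT_WRITE, MAP_SHARED, dst, 0);
    if (in == MAP_FAILED || out == MAP_FAILED)
        return 1;

    for (size_t i = 0; i < len / 4; i++) {
        uint32_t p = in[i];
        /* halve each 8-bit channel: a cheap darkening effect */
        out[i] = (p >> 1) & 0x7f7f7f7f;
    }
    return 0;
}
[/code]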
 
...well you could, but for something low level maybe the KISS principle is preferred!!! :)
 
Your definition of "low level" is skewed. If this were ever implemented, it would run as a regular userspace process, as the logged-in user, as a normal background program.
 
Your definition of "low level" is skewed. If this were ever implemented, it would run as a regular userspace process, as the logged-in user, as a normal background program.
No, you completely misunderstand; I mean a kernel (evdev) module.
 
YES! I absolutely do! This should be a fact of the hardware - how it works, as if implemented in silicon. No userspace application should have ANY access, except via its own configuration (deliberately limited: emit a keycode or character, or run a script).

The overlay will ONLY be active while the "special" key is held down, i.e. while you're entering special characters, and you'll still be able to see most of the UI through it...
 
I've written kernel modules and I don't see why it's a bad idea...
 
I see Notaz's Live Info is a userspace app, but then I don't think that handles any sort of input. Would you be able to get a userspace application to trap keystrokes before they got to the "active" application?
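From what I can tell, a userspace process that opens the evdev node can take an exclusive grab with the EVIOCGRAB ioctl, which stops events reaching any other reader (including X). A minimal, untested sketch; the device path is just an example:

[code]
/* Sketch: exclusively grab an input device so no other client
 * (including X) receives its events, then read them ourselves. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>

int main(void)
{
    int fd = open("/dev/input/event1", O_RDONLY);   /* example path */
    if (fd < 0 || ioctl(fd, EVIOCGRAB, 1) < 0) {
        perror("grab");
        return 1;
    }

    struct input_event ev;
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        if (ev.type == EV_KEY)
            printf("key %d value %d\n", ev.code, ev.value);
    }

    ioctl(fd, EVIOCGRAB, 0);   /* release the grab */
    close(fd);
    return 0;
}
[/code]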

On a kind of related note: is this something that's exclusive to OMAPs? How could I find out if my NUC (x86 with Intel HD 6000 running Linux Mint) supports display layers?
 
You *could* do it as a userspace app, but it would always be battling other apps and there could be unforeseen interactions.

Doing it at the hardware level gives a different type of keyboard device: it's a keyboard with a touchscreen element baked in, which is much more powerful than the separate items on their own.

How could I find out if my NUC (x86 with Intel HD 6000 running Linux Mint) supports display layers?
The hardware documentation for the Intel chipset is OPEN, extensive, and comprehensive...
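More directly, you can just ask the kernel's DRM/KMS side how many hardware planes (overlays/cursors) the GPU exposes, e.g. with a few libdrm calls. A small sketch; the card node path may differ on your box:

[code]
/* Sketch: list the display planes (hardware layers) exposed by a DRM device.
 * Build with: gcc probe_planes.c -o probe_planes $(pkg-config --cflags --libs libdrm) */
#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);   /* path may vary */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Ask for the full plane list, not just the legacy overlay planes. */
    drmSetClientCap(fd, DRM_CLIENT_CAP_UNIVERSAL_PLANES, 1);

    drmModePlaneRes *res = drmModeGetPlaneResources(fd);
    if (!res) {
        fprintf(stderr, "no KMS plane resources\n");
        return 1;
    }

    printf("%u planes exposed\n", res->count_planes);
    for (uint32_t i = 0; i < res->count_planes; i++) {
        drmModePlane *p = drmModeGetPlane(fd, res->planes[i]);
        if (p) {
            printf("plane %u: possible CRTCs 0x%x\n",
                   p->plane_id, p->possible_crtcs);
            drmModeFreePlane(p);
        }
    }
    drmModeFreePlaneResources(res);
    close(fd);
    return 0;
}
[/code]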
 
Would you deign to elaborate on why you think it's such a bad idea?

And if there is sufficient interest, I will write such a thing...
 
Personally, I'd approach this by making the layer the render target for a Qt program, maybe with added QImage access to the main framebuffer. This way you could have pretty animated QtQuick UIs there with reasonably little work. If reading pixels from SGX is fast enough, that UI could even be HW accelerated using QSceneGraph. That would open up possibilities for nice shader-based distortions to the underlying application, like desaturation or blurring :)

The bigger issue IMO is hijacking the input when in overlay mode. You don't want the input reaching both the overlay application and the regular applications beneath it.
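One way to do that hijack in userspace would be the classic proxy trick: take an exclusive EVIOCGRAB on the real device and re-inject events through a uinput virtual device only when the overlay isn't up. A rough, untested sketch; the device path and the overlay_active() check are placeholders for however the overlay state would actually be shared:

[code]
/* Sketch: userspace input proxy. Grab the real keyboard exclusively and
 * re-emit its events through a uinput virtual device, except while the
 * overlay is active, when they are consumed instead. */
#include <string.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>
#include <linux/uinput.h>

static int overlay_active(void) { return 0; }   /* placeholder */

int main(void)
{
    int real = open("/dev/input/event1", O_RDONLY);       /* example path */
    int virt = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
    if (real < 0 || virt < 0 || ioctl(real, EVIOCGRAB, 1) < 0)
        return 1;

    /* Declare the virtual device as a keyboard that can emit any key. */
    ioctl(virt, UI_SET_EVBIT, EV_KEY);
    for (int k = 0; k < KEY_MAX; k++)
        ioctl(virt, UI_SET_KEYBIT, k);

    struct uinput_user_dev dev;
    memset(&dev, 0, sizeof(dev));
    snprintf(dev.name, UINPUT_MAX_NAME_SIZE, "overlay-proxy-kbd");
    dev.id.bustype = BUS_VIRTUAL;
    if (write(virt, &dev, sizeof(dev)) != sizeof(dev))
        return 1;
    ioctl(virt, UI_DEV_CREATE);

    struct input_event ev;
    while (read(real, &ev, sizeof(ev)) == sizeof(ev)) {
        if (overlay_active())
            continue;                     /* swallow: feed the grid instead */
        write(virt, &ev, sizeof(ev));     /* pass through to the system */
    }

    ioctl(virt, UI_DEV_DESTROY);
    return 0;
}
[/code]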
 
(I see 3 framebuffer devices on Trashy's devboard, so the OMAP5 should have comparable capabilities in this regard to the Pandora.)
 
I also don't see a major problem with it being at the kernel device level. That's basically what it is: a device, another keyboard-type input.

On the Pandora, the third layer is privileged; only root can draw to it, so even if it were running in "user space" it would still need to be run as root: it cannot, by intentional design, be run as the currently logged-in user.

The idea was that this third layer would exist for the OS to give feedback and receive input from the user in a way that was (practically, for all intents and purposes) secure. The overlay experiment I did ran as a root user application, but it really should have been deeper than that.
 