Rethinking PNDs and standard packaging approach


Wally

Yesterday I was playing with packaging a PND manually to try and see why the PNDMakeauto thing was not working. If we are to continue using PNDs on the Pyra, then we should start considering the options now, before the OS is ready to rumble.

A few problems I can think of with the current PND system:

* Too much overhead - we bundle a lot of libraries into the PND (when we compile against a newer lib).

* No standard of compression - I guess this really doesn't matter too much, but I think we should choose one way or the other to make things less complex and easier to script and document for user installs. I think people will vote SquashFS here, as the tools are readily available and Windows / OS X users can expand / compress images a lot more easily, since loopback mounting isn't available to them (see the sketch after this list).

* Software categories - B-Zar made me aware of a software category standard (can't remember where, so no link, sorry); we should probably try and implement this properly :D (please remind me if you know what I'm talking about)
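
For the curious, here's a rough sketch of the SquashFS route (directory and file names are just placeholders):

```
# Build a compressed SquashFS image from an application directory
# (gzip is the default codec; -comp selects another one if wanted).
mksquashfs ./myapp-dir myapp.squashfs -comp gzip -noappend

# List the contents without mounting (works without root):
unsquashfs -l myapp.squashfs

# Or loop-mount it read-only (this is the loopback support that
# plain Windows / OS X installs lack):
sudo mount -o loop,ro myapp.squashfs /mnt/pnd
```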

Maybe some goals to aim for:

* Backwards compatibility with the Pandora (if both OSes can work together)

* Move as much of the clutter into the OS as possible, if said libraries are introduced there

* Minimum OS requirement category?

It's a start at least, and everything is of course debatable as long as we keep it on-topic (pretty important, as we're discussing the future of the PND).


EDIT: Most of this has already been covered on the wiki (http://pandorawiki.org/New_PND_format#The_2014.2FPyra_effort), but I feel we need to have a discussion about it if we don't already have one. :/
 
* Too much overhead - we bundle a lot of libraries into the PND (when we compile against a newer lib).
But isn't that what the PND system is all about: providing self-contained applications with minimal reliance on libs outside of the PND?
* No standard of compression - I guess this really doesn't matter too much, but I think we should choose one way or the other to make things less complex and easier to script and document for user installs. I think people will vote SquashFS here, as the tools are readily available and Windows / OS X users can expand / compress images a lot more easily, since loopback mounting isn't available to them.
Isn't SquashFS already a quasi-standard? What downsides are there if more than one standard is used?
* Software categories - B-Zar made me aware of a software category standard (can't remember where, so no link, sorry); we should probably try and implement this properly :D (please remind me if you know what I'm talking about)
AFAIK the OP team tried to stick to the categories freedesktop.org recommends.
 
The software categories follow the freedesktop.org standard, and the minimum OS tag is also there (but not used right now).
 
But isn't that what the PND system is all about: providing self-contained applications with minimal reliance on libs outside of the PND?
That was part of the original requirements, sure, but in practice I don't really see what it has gained us as technology has advanced.

On the other hand, the requirements that derive from them being installed to removable filesystems are still valid IMO, and cover some of the same area.
 
But isn't that what the PND system is all about: providing self-contained applications with minimal reliance on libs outside of the PND?
That was part of the original requirements, sure, but in practice I don't really see what it has gained us as technology has advanced.
On the other hand, the requirements that derive from them being installed to removable filesystems are still valid IMO, and cover some of the same area.
I'm far from having much knowledge in that sector, so forgive me if I'm talking nonsense here:
What advancement in technology are you referring to? Putting libraries into the PND was an easy way of keeping dependencies on libraries the system has to offer to a minimum. That helps a lot in keeping things portable, at the price of the PNDs being bigger. I don't consider that a real problem (which is easy for me to say, as I don't get a bill for the server, unlike Ed): the NAND will be bigger, big SD cards (>32 GB) are in an affordable price region nowadays, and we can assume that the internal WiFi will allow for better download speeds on the Pyra.
 
as the NAND will be bigger
Yes, which means more software will likely be "installed" through Debian's package system, rather than requiring a Pyra repo to download a .PND.
IMO, it's difficult to say at this point how much Debian and the larger NAND will affect the need for a PND-style packaging system. I'll certainly be able to throw away that monstrous (referring to size, not that it's bad) Code::Blocks PND. I can install gcc, codeblocks, qtcreator, mono etc. on the Pyra.
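
For example, something like this should be all it takes (package names as found in the Debian archive; the exact mono metapackage name is my guess):

```
# Pull full toolchains straight from the Debian repositories
# instead of shipping them as multi-hundred-MB PNDs.
sudo apt-get update
sudo apt-get install gcc codeblocks qtcreator mono-complete
```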
 
as the NAND will be bigger
Yes, which means more software will likely be "installed" through Debian's package system, rather than requiring a Pyra repo to download a .PND.

IMO, it's difficult to say at this point how much Debian and the larger NAND will affect the need for a PND-style packaging system. I'll certainly be able to throw away that monstrous (referring to size, not that it's bad) Code::Blocks PND. I can install gcc, codeblocks, qtcreator, mono etc. on the Pyra.
I thought that such applications aren't meant to be distributed as PNDs anymore anyway, just the ones that are optimized/adjusted for the Pyra(?)
 
Internal memory will still be limited, just not as much as on the Pandora (and this will always be the case for mobile systems, at least comparatively more so than for stationary ones). Therefore everything is still a PND candidate, IMO especially large optional components like office and development tools, though there is now the additional option of using the distribution (or other repository) provided package.


Being distributed as a PND has nothing to do with being optimized for the system. A Pyra .deb package repository will exist just like a Pandora .ipk one exists now, which distributes specially adjusted versions of standard packages. PNDs are about the ability to run software (not only data) from removable media that actually can be removed from the running system. An alternative could be an Amiga-style logical volume system with 'Please insert volume xxx into any drive' requesters, but I don't know how feasible that is within the Linux VFS layer. Additionally, PNDs allow bundling their own dependencies, which trades space (on the removable media, where it is cheap) for easier (actually more distributed) compatibility management.
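
For reference, this is roughly how a PND is assembled today: a mountable image (ISO or SquashFS) with the PXML.xml metadata and an icon appended to the end (file names here are placeholders):

```
# Classic pnd_make-style assembly: build the filesystem image,
# then concatenate the PXML metadata and a PNG icon onto the end.
mksquashfs ./myapp-dir myapp.squashfs -noappend
cat myapp.squashfs PXML.xml icon.png > myapp.pnd
```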


Looking into my crystal ball, I see fewer PNDs at first, because a lot of reasonably recent packages are provided by the standard repositories. Later, when newer gadgets become available that depend on experimental lib versions that aren't fully compatible with the stable ones in the distribution, all the PNDs that had been replaced by standard packages will reappear.
 
But isn't that what the PND system is all about: providing self-contained applications with minimal reliance on libs outside of the PND?
That was part of the original requirements, sure, but in practice I don't really see what it has gained us as technology has advanced.
I'm far from having much knowledge in that sector, so forgive me if I'm talking nonsense here:
What advancement in technology are you referring to? Putting libraries into the PND was an easy way of keeping dependencies on libraries the system has to offer to a minimum. That helps a lot in keeping things portable, at the price of the PNDs being bigger. I don't consider that a real problem (which is easy for me to say, as I don't get a bill for the server, unlike Ed): the NAND will be bigger, big SD cards (>32 GB) are in an affordable price region nowadays, and we can assume that the internal WiFi will allow for better download speeds on the Pyra.
That's exactly what I mean - newer versions of libs with bug fixes get released. We've been significantly protected from that on Angstrom compared to how it will be on Debian, as we managed to pick a dying platform, but on Debian things will get updated all the time and dependencies on newer stuff will get baked into all sorts of components. I guess there's still a lot of scope for games that only loosely depend on SDL versions and so on, and indeed aren't typically distributed via the Debian repositories.

Some of the recent PNDs do take rather a long time to download though, so if libs from those could be migrated to the update repos, that's an improvement as I see it. Upgrading existing libs might cause issues for older games and software, as the SGX video driver occasionally does (though that's largely down to it being closed source, and should hopefully be less of a problem with open source libs).


I'd propose that the idea of static platforms, akin to the old console or home computer model, is less valid these days on a networked device that's potentially open to all sorts of attacks that will mandate user updates. So the idea of static games living on the modern equivalent of floppies or CDs is also less valid, and we should get used to software needing maintainers, not just authors.
 
Some kind of mechanism to separate data and core would be nice (as is often done for games in GNU/Linux distros; e.g. in Debian there is "wesnoth-core", "wesnoth-music", "wesnoth-data" etc., plus one meta-package "wesnoth" to install everything), so you don't always have to download hundreds of megabytes just to get a bugfix or some new levels.

That, or have an automated incremental diff mechanism.
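
A binary diff tool like xdelta3 could provide that more or less off the shelf - just a sketch, not an existing repo feature, and the file names are illustrative:

```
# Server side: encode a binary delta between two package revisions.
xdelta3 -e -s myapp-1.0.pnd myapp-1.1.pnd myapp-1.0-to-1.1.vcdiff

# Client side: decode the new package from the old one plus the
# delta, so only the small delta travels over the wire.
xdelta3 -d -s myapp-1.0.pnd myapp-1.0-to-1.1.vcdiff myapp-1.1.pnd
```

(Deltas over already-compressed images tend to stay large, so in practice you'd want to take the diff over the uncompressed contents.)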

Also I'm not in favor of redundantly including copies of general-purpose libraries inside the package (or using statically linked binaries), not so much because of the size overhead, but mostly because that makes it impossible to have an update/bugfix of such libraries without having to update all packages that depend on it. Imagine a crucial security bug in libpng or libssl or whatever -- if it can be fixed just by updating the library, it's not that big of a deal, but if you also have to hope that all authors of all packages that depend on it will repackage their stuff, then it may take ages before the problem is solved. Some of this could be somewhat solved by having an automatic compilation+packaging system (cf. what Cloudef is doing), but even then you still get the annoying problem of needing to re-download potentially a lot of packages each time a widely used library gets updated. Of course if you're using very specific libraries that are not worthwhile to have installed globally, it's something else.

The same holds for other kinds of dependencies: if you have a python/perl/java application, you shouldn't have to include an entire python/perl/java system in the package just for self-containment. Instead there should be a way to specify which debian packages your pyra package depends on, and on install (and perhaps on execution) a check should happen to see if those packages are installed, and if not, a dialog should pop up asking the user whether he wants to install those dependencies or abort. That's the only sane way to do it, if you want to be able to have pyra packages that depend on things like gnuplot, latex, R, lua, prolog or whatever big component that is not expected to be installed by default but is useful more generally than just for that particular pyra package.
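
A minimal sketch of what that check could look like, assuming a hypothetical plain-text "depends" file inside the package that lists Debian package names (nothing like this exists yet):

```
#!/bin/sh
# Hypothetical installer hook: read the Debian dependencies the
# Pyra package declares and offer to apt-get whatever is missing.
missing=""
while read -r pkg; do
    dpkg-query -W -f='${Status}' "$pkg" 2>/dev/null \
        | grep -q "install ok installed" || missing="$missing $pkg"
done < ./depends

if [ -n "$missing" ]; then
    echo "This package needs:$missing"
    printf "Install via apt-get now? [y/N] "
    read -r answer
    case "$answer" in
        y|Y) sudo apt-get install $missing ;;
        *)   echo "Aborted."; exit 1 ;;
    esac
fi
```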

Finally I think not all of the metadata should be included in the package itself. Of course the textual metadata, the icon, and perhaps one screenshot should be inside the package. But the rest of the screenshots, and perhaps videos or whatever, they should not be included in the package itself. There simply is no point in that. You want to look at those screenshots (and videos and whatever) before you download the application, and once you've downloaded it, you don't need them anymore. They should be hosted only on the repo server, and it should be possible to update them without having to modify the package itself.
 
Instead there should be a way to specify which debian packages your pyra package depends on, and on install (and perhaps on execution) a check should happen to see if those packages are installed, and if not, a dialog should pop up asking the user whether he wants to install those dependencies or abort. That's the only sane way to do it, if you want to be able to have pyra packages that depend on things like gnuplot, latex, R, lua, prolog or whatever big component that is not expected to be installed by default but is useful more generally than just for that particular pyra package.
But won't such programs be better put in the Debian repository anyway, where Debian dependency tracking already works?
 
It depends on how big they are. The NAND is only 2GB. Perhaps there should be no NAND at all, and instead a MicroSD card.
 
Instead there should be a way to specify which debian packages your pyra package depends on, and on install (and perhaps on execution) a check should happen to see if those packages are installed, and if not, a dialog should pop up asking the user whether he wants to install those dependencies or abort. That's the only sane way to do it, if you want to be able to have pyra packages that depend on things like gnuplot, latex, R, lua, prolog or whatever big component that is not expected to be installed by default but is useful more generally than just for that particular pyra package.
But won't such programs be better put in the Debian repository anyway, where Debian dependency tracking already works?
Not if they're too big to fit on the internal storage, or too rarely used to have to be constantly available. There are advantages of the PND system, and it makes sense to keep those advantages. "Cartridge-like" SD cards that contain software that becomes immediately available once you insert the card, that's pretty nice. Games can be quite big (mostly because of media files), so we probably still want to package them in a PND-like way, even if we would have lots of internal storage. But they can also have lots of dependencies (especially open source games).

On the Pandora the general approach was: one fixed set of opkg packages is pre-installed (including lots of commonly used things like SDL, perl, python, but not other commonly used things like java), and this set can essentially only grow because otherwise PNDs get broken. But it can't really grow because of limited space. So you need to make a real good guess of what is likely to be and remain the most important stuff to put in that set.

For the Pyra I propose a different approach, where you start with a more minimal pre-installed set, and new packages can be installed by the user (using synaptic or something like that) or are auto-installed when you want to install a pyra PND package that has unfulfilled dependencies.

So if you don't need something, you don't have to waste internal storage on it. And most importantly: we don't have to make any decisions about what to put in the pre-installed base system and what not to put in there, decisions that will potentially haunt us for the rest of the device lifetime.

On the Pandora space is very limited, so it was reasonable to put only the very basics in the pre-installed system. But if we have 2GB or 4GB to work with, it becomes harder to draw the line between what is "useful enough" to put in the base system and what is "too exotic" and should be put in the PND instead. E.g. which of the interpreted (or runtime-requiring) programming languages do we need to put in the base system? Probably bash, perl, python again. Probably also Java. What about Lua? Ruby? Haskell? Erlang? Common Lisp? Scheme? ML? Scala? Prolog? Mercury? Oz? PHP? Tcl? etc. etc.

Where do you put the cut-off? It's impossible to predict the future and know now which one of these will turn out to be the most important.

If someone wants to make an automated build service that creates "stand-alone" Pyra packages that recursively include all their runtime dependencies (except for the really basic ones like bash) and use static linking to all their libraries (except the really basic ones like libc), then that's perfectly fine with me -- it could be useful when you want to be absolutely sure that you can use your SD card on any Pyra you encounter, even one that does not have the deps installed and is in the middle of nowhere without internet access to auto-install the deps it is missing. But I think the norm should be "light" Pyra packages with meta-data specifying their dependencies and an install system that does the necessary apt-gets.

From the point of view of the pyra package specification, both types of packages could be supported: the "stand-alone" one would just have its "dependencies" field empty.
 
Internal memory will still be limited, just not as much as on the Pandora (and this will always be the case for mobile systems, at least comparatively more so than for stationary ones). Therefore everything is still a PND candidate, IMO especially large optional components like office and development tools, though there is now the additional option of using the distribution (or other repository) provided package.
Why should the internal memory be limited? The last news in that regard was that either NAND or a MicroSD card can be used as non-removable storage. While theoretically everything is a PND candidate, I doubt that many applications readily available in the Debian repository will see a re-release as a PND. There may be some, but especially non-games, with their oftentimes slow release cycles, won't generate much interest in taking on the struggle - especially if they are rather "hard" to handle and the transition into a PND has some drawbacks (like it is with Firefox currently; no offense, hdonk). And even with development tools and the rare heavier packages (like LibreOffice), I doubt you will fill up your NAND quickly. Media (gfx, audio, video) are what will eat away the internal storage, so games are the real factor, and they are inherently a lot more viable candidates:

Being distributed as a PND has nothing to do with being optimized for the system. A Pyra .deb package repository will exist just like a Pandora .ipk one exists now, which distributes specially adjusted versions of standard packages.
Just a matter of how you see things. For the reasons given above, I only see applications/games with a fast release cycle, or ones that need rather heavy adjustments that won't make it upstream (i.e. mostly games), as likely candidates for someone to create and maintain a PND. And there is also the possibility to create custom Debian packages and distribute them, if the Debian repo only offers a rather outdated version of an application/game.

Looking into my crystal ball...
Mine's from a different manufacturer. I see a lot less use of PNDs.
That's exactly what I mean - newer versions of libs with bug fixes get released. We've been significantly protected from that on Angstrom compared to how it will be on Debian, as we managed to pick a dying platform, but on Debian things will get updated all the time and dependencies on newer stuff will get baked into all sorts of components. I guess there's still a lot of scope for games that only loosely depend on SDL versions and so on, and indeed aren't typically distributed via the Debian repositories.

Some of the recent PNDs do take rather a long time to download though, so if libs from those could be migrated to the update repos, that's an improvement as I see it. Upgrading existing libs might cause issues for older games and software, as the SGX video driver occasionally does (though that's largely down to it being closed source, and should hopefully be less of a problem with open source libs).
But won't that be a problem in the future then, negating the PND idea? If you think that essential libraries will be updated a lot more frequently than they are now, wouldn't that lead to the conclusion that, in order to keep some sort of "plug and play" functionality for PNDs, you have to include even more libs in a PND? Otherwise people would need to constantly keep their system up to date to make sure they can run the latest things released on the repo.
That, or have an automated incremental diff mechanism.
It seems a lot more reasonable to hope someone implements this than to hope that every single dev interested in the platform is willing/able to pull off modularizing their work.
Also I'm not in favor of redundantly including copies of general-purpose libraries inside the package (or using statically linked binaries), not so much because of the size overhead, but mostly because that makes it impossible to have an update/bugfix of such libraries without having to update all packages that depend on it. Imagine a crucial security bug in libpng or libssl or whatever -- if it can be fixed just by updating the library, it's not that big of a deal, but if you also have to hope that all authors of all packages that depend on it will repackage their stuff, then it may take ages before the problem is solved. Some of this could be somewhat solved by having an automatic compilation+packaging system (cf. what Cloudef is doing), but even then you still get the annoying problem of needing to re-download potentially a lot of packages each time a widely used library gets updated. Of course if you're using very specific libraries that are not worthwhile to have installed globally, it's something else.
How often will that happen in reality (I have no experience with Debian)? And I don't see a need for every package being updated with a new lib version. If the whole package still works, why change something in it? There could be security concerns, but how many applications/games will that affect?
Finally I think not all of the metadata should be included in the package itself. Of course the textual metadata, the icon, and perhaps one screenshot should be inside the package. But the rest of the screenshots, and perhaps videos or whatever, they should not be included in the package itself. There simply is no point in that. You want to look at those screenshots (and videos and whatever) before you download the application, and once you've downloaded it, you don't need them anymore. They should be hosted only on the repo server, and it should be possible to update them without having to modify the package itself.
I disagree. While things like videos unnecessarily increase the size of a PND, I don't see any reason to limit the amount of screenshots - what harm is done by including several KB of pictures?
It depends on how big they are. The NAND is only 2GB. Perhaps there should be no NAND at all, and instead a MicroSD card.
Where does this number come from? The last time people were throwing arguments at each other about that topic, Ed leaned more towards 16GB, while some community members wanted to slap the extra money needed for 32GB onto the sales price.
On the Pandora space is very limited, so it was reasonable to put only the very basics in the pre-installed system. But if we have 2GB or 4GB to work with, it becomes harder to draw the line between what is "useful enough" to put in the base system and what is "too exotic" and should be put in the PND instead. E.g. which of the interpreted (or runtime-requiring) programming languages do we need to put in the base system? Probably bash, perl, python again. Probably also Java. What about Lua? Ruby? Haskell? Erlang? Common Lisp? Scheme? ML? Scala? Prolog? Mercury? Oz? PHP? Tcl? etc. etc.

Where do you put the cut-off? It's impossible to predict the future and know now which one of these will turn out to be the most important.
While you certainly can't predict the future, you can at least make estimates about what will be commonly used; this is no new territory here - most conclusions could be drawn from experience with desktop Debian installations and the content of the PNDs currently on the repo.
 