Pandora PND diffs


I stand corrected: PPF doesn't seem to be able to grow or shrink the file; it's a very straightforward in-place exchange, just for much larger files than IPS can handle.

This BPS seems like it could have worked, but the source download is giving me a 404.
 
Yeah, I spent most of the afternoon trying to build xz so I could uncompress it, and then found out he's using C++11. I need to update my BeagleBoard's build environment.
 
I created a very simple test: create a squashfs, add an empty file to it named "a" so that it appears near the start (where it will do the most "damage"), and then edit that file. Altogether three archives were created, all 8MB in size.
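
Roughly, the setup looked like this (a sketch only; the filler file, names, and the -noappend flag are illustrative, not the exact commands I ran):

```python
# Sketch of the test setup; assumes mksquashfs is on the PATH.
import os
import subprocess

src = "testdir"
os.makedirs(src, exist_ok=True)

# ~8 MB of incompressible filler so each archive ends up around 8 MB
with open(os.path.join(src, "pad"), "wb") as f:
    f.write(os.urandom(8 * 1024 * 1024))
subprocess.run(["mksquashfs", src, "one.sqfs", "-noappend"], check=True)

# Add an empty file "a" so it sorts near the front of the directory
open(os.path.join(src, "a"), "w").close()
subprocess.run(["mksquashfs", src, "two.sqfs", "-noappend"], check=True)

# Edit that file for the third archive
with open(os.path.join(src, "a"), "w") as f:
    f.write("some new content\n")
subprocess.run(["mksquashfs", src, "three.sqfs", "-noappend"], check=True)
```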

A BPS diff from the first to the second only used a few K. Adding the empty file had no impact on the data or the superblock; the diff was strictly in the directory table near the end of the file.

A diff between the first and third, and between the second and third, however, yielded a 7.5MB file: effectively the inserted data caused a shift which the tool couldn't handle, so the "diff" was almost the entire file. In theory it should be able to handle this: the format supports it, and I was able to confirm that a huge chunk of data is identical between all three files, but I suspect this is a much harder problem to solve than it initially seems.

TL;DR: Binary diff and patching is theoretically possible between squashfs (and therefore PND) files, but the tools that should be able to create such a diff don't.
 
OH SNAP SON! I didn't even notice that little check box.

Scratch everything I said: setting it to delta mode and adding a file near the start of the archive results in a 39K patch file, vastly superior to the 7.5MB patch. Brilliant. The file I added was only 15 bytes, but most of the 39K is probably metadata at the end of the archive.

edit: and just to be sure, I applied the patch and performed an actual diff between it and the alternate file: identical, as they should be.
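
(The check itself was just a byte-for-byte comparison, along these lines; the file names here are assumptions:)

```python
# Byte-for-byte check that the patched archive matches the real target.
import filecmp
assert filecmp.cmp("two_patched.sqfs", "two.sqfs", shallow=False)
```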
 
Sounds good. :)

I suppose another good test would be to download an archived version of a relatively large PND from the repo (e.g. Wesnoth) and create a patch to bring it to the current version. It would be interesting to see how big that patch would be. I expect a large portion of the size to be game assets that don't necessarily change between versions... (Edit: I haven't actually looked at the composition of that PND beyond extracting the run script, so I'm speculating on that.)

- Neelix
 
Fortunately I had a copy of Wesnoth 1.8.6 from 2011 lying around. Lots of changes, an excellent test to see what can really be done. Would I have to download the entire 360MB, or could I get away with an 80MB delta? I downloaded the latest and started doing a diff.

Unfortunately it has been running for an hour, has consumed 10GB of RAM, and I have no idea how much longer it is going to take. I started it at 10:49pm; I'm going to leave it overnight and see when/if it finishes. Oddly enough, although it has consumed a large amount of RAM and is obviously doing something, the CPU is running cold. I'm not sure what it could be doing that would allow it to be idle 98% of the time.

edit: swap. For some reason it was spending 98% of its time waiting on the swapfile. It sucked up 8GB of physical RAM, needed another 2GB, and for some reason it just had to refer to all of it constantly. If someone with 16GB of physical RAM wants to give it a try, the setup is pretty simple.
 
Maybe it would be a good idea not to go the whole hog in the beginning. See if a smaller PND/diff works and then ramp up the size; maybe some of the statistics milkshake has published could help find the right candidates.

It could be a bug in the code that only occurs after the files exceed a certain size, after all.
 
These algorithms typically have a space complexity that is linear in the input size, so it boils down to their constant factors. If the factor is 10, then you'll need at least 10GB of RAM to construct a diff between two 500MB files.
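
As a back-of-the-envelope check (the constant factor is just an assumed parameter):

```python
# Rough peak-RAM estimate for a diff tool whose memory use is linear
# in the combined input size; "factor" is an assumed constant.
def estimate_ram_gb(old_mb, new_mb, factor=10):
    return (old_mb + new_mb) * factor / 1024.0

print(estimate_ram_gb(500, 500))  # factor 10 -> ~9.8 GB for two 500 MB files
```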

We'll need something with a low enough memory usage if we want the repo server to auto-generate the diffs.

If, on the other hand, the server only has to do merges (just like the client) and it's the uploader's responsibility to make the diff file, then it's up to the uploader to decide whether or not he wants to spend the CPU time to create a diff (and save upload time).
 
It seems to need 15 times the sum of the two files. I took Wormux from 2010, a 99MB PND, and diffed it against Warmux, the most recent version at 106MB. The result was a 35MB patch file. It consumed just over 3GB of RAM (it peaked at 3.3GB but mostly sat at exactly 3.0GB) and took 2 hours and 6 minutes. The memory usage is consistent with what Wesnoth needed: 10GB for 660MB of summed files.
 
Not likely. I haven't really looked at the code, but these things tend to be pretty linear in nature. Maybe it's using an array of longs where it can get away with an array of shorts or even bytes or something, but I wouldn't bet on that being the case, nor on it saving a lot of space even if it were.
 
Well, the test was not successful. I needed my sleep, so I had to kill the process to turn off the computer.

I was creating a patch from Wesnoth 1.8.6 (289 MB) to Wesnoth 1.11.3 (343 MB). When I killed the process it had been running for 16 hours and had consumed 11.7 GB of RAM. The partially created patch file was 77 MB.
 
I don't understand why such a process would take so long or require so much memory... As it stands, it seems to me that these constraints make the process more trouble than it's worth...

- Neelix
 
Probably because it's trying to find an optimal solution: the smallest possible patch file.

I tried xdelta3 on the same files. With default parameters the patch file was 279 MB; with changed parameters, 200 MB. The time to create the patch was less than a minute.
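
For reference, the invocations looked roughly like this (wrapped in Python here; the -9/-S options are just examples of parameters one could change, not necessarily the ones I used, and the file names are assumptions):

```python
# Sketch of creating and applying an xdelta3 patch via subprocess.
import subprocess

def encode(old, new, patch, extra=()):
    subprocess.run(["xdelta3", "-e", *extra, "-s", old, new, patch], check=True)

def decode(old, patch, new_out):
    subprocess.run(["xdelta3", "-d", "-s", old, patch, new_out], check=True)

encode("wesnoth-1.8.6.pnd", "wesnoth-1.11.3.pnd", "wesnoth-default.vcdiff")
encode("wesnoth-1.8.6.pnd", "wesnoth-1.11.3.pnd", "wesnoth-tuned.vcdiff",
       extra=["-9", "-S", "djw"])
```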
 
Finding a minimal diff between two files is an NP-hard problem, so it's not surprising that the tools that can produce small diffs also have bad time and space complexity.

Given the structure of squashfs, it should be possible to do much better than for arbitrary files by taking the relevant offsets into account. That's probably more work to code, though, than implementing the unpack, file-level-diff, repack approach.
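
Very roughly, the unpack-and-compare half of that would look something like this (unsquashfs/mksquashfs are the real tools; everything else, including ignoring the PND's appended PXML metadata, is just a sketch):

```python
# Unpack both archives and list files that changed or were added;
# a file-level patch would then only need to ship those entries.
import filecmp
import os
import subprocess

subprocess.run(["unsquashfs", "-d", "old_root", "old.pnd"], check=True)
subprocess.run(["unsquashfs", "-d", "new_root", "new.pnd"], check=True)

def changed(cmp):
    for name in cmp.diff_files + cmp.right_only:
        yield os.path.join(cmp.right, name)
    for sub in cmp.subdirs.values():
        yield from changed(sub)

for path in changed(filecmp.dircmp("old_root", "new_root")):
    print("needs to go into the patch:", path)
```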
 
hrm... I wonder if byuu's algorithm can be adjusted to make it possible to set how aggressive it is about that... perhaps a happy medium can be found?

- Neelix
 