Free Lossless Image Format (FLIF)


I'll do a compression comparison with some other formats; I'll include BPG too.

I'll put the source on github.

Some of the main ideas behind the compression:

- it uses CABAC for entropy coding, just like FFV1

- for interlacing it uses a generalization of PNG's Adam7; unlike PNG, the geometry of the 2D interlacing is exploited heavily to get better pixel estimation, which means the overhead of interlacing is small (vs simple scanline encoding, which has the benefit of locality so usually compresses better)

- the colorspace is a lossless simplified variant of YIQ; the alpha and Y channels are encoded first, the chroma channels later (a sketch of a reversible transform in this spirit appears at the end of this post)

- the real innovation is in the way the contexts are defined for the arithmetic coding: during encoding, a decision tree is constructed (a description of which is encoded in the compressed stream) which is a way to dynamically adapt the CABAC contexts to the specific encoded image. We have called this method "MANIAC", which is a backronym for "Meta-Adaptive Near-zero Integer Arithmetic Coding".
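
To make that a bit more concrete, here is a rough, purely illustrative sketch of the idea (the names below are made up for this post, not the actual FLIF data structures): a small decision tree that routes each pixel to an adaptive arithmetic-coding context based on local properties such as differences between neighbouring pixels.

class ManiacNode:
    # Leaf nodes carry a context id (i.e. one adaptive probability model of the
    # arithmetic coder); inner nodes test one local property against a threshold.
    def __init__(self, context_id, prop=None, threshold=0, left=None, right=None):
        self.context_id = context_id
        self.prop = prop
        self.threshold = threshold
        self.left = left
        self.right = right

def select_context(node, properties):
    # Walk down the tree until a leaf is reached and return its context id.
    while node.left is not None:
        node = node.left if properties[node.prop] > node.threshold else node.right
    return node.context_id

The "meta-adaptive" part is that the tree itself is learned per image during encoding and a description of it is written into the stream, so the decoder rebuilds exactly the same context mapping.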

I'll have to write a paper about this at some point to explain everything.
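
In the meantime, regarding the lossless colour transform mentioned in the list above: I won't spell out the exact YIQ-like transform here, but the general idea is the same as in the well-known reversible YCoCg-R lifting scheme, sketched below (an illustration of the principle, not necessarily the exact transform FLIF uses). Every step uses only integer additions and shifts and can be undone exactly, so nothing is lost, while the chroma planes end up with small values for most natural images.

def rgb_to_ycocg_r(r, g, b):
    co = r - b
    tmp = b + (co >> 1)
    cg = g - tmp
    y = tmp + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    tmp = y - (cg >> 1)
    g = cg + tmp
    b = tmp - (co >> 1)
    r = b + co
    return r, g, b

assert ycocg_r_to_rgb(*rgb_to_ycocg_r(200, 100, 50)) == (200, 100, 50)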
How did you come up with this? Was it a long-term private project, part of your job, or did you wake up one night from a dream and yell "EUREKA"? :D
 
This is really cool, I'd also like to see a comparison of compression/decompression speed with other formats.
Comparing decompression speed is a bit tricky, because it depends a lot on how much effort you spend on optimizing the inner loop. For formats like PNG and JPEG, this optimization has been done quite well because they have existed for a long time, and they date from a time when computers were much slower than they are now.

Comparing compression speed is even trickier, because it depends on how much brute-forcing you want to do to get the size down. E.g. standard PNG compression is faster than my current implementation of FLIF, but if you do pngcrush -brute followed by pngout (that's more or less the way to get the smallest PNG file), then that will take (much) more time than my current implementation of FLIF. There are many degrees of freedom in how to encode in most formats, and I'm sure that a brute-force approach to encoding FLIF would take a huge amount of time and produce even smaller files.
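
For what it's worth, the brute-force PNG pipeline I'm referring to looks roughly like this (a sketch only; it assumes pngcrush and pngout are installed, and the exact options and behaviour may differ on your system):

import subprocess

def smallest_png(src, dst, tmp="crushed.png"):
    # Try all of pngcrush's filter/strategy combinations (slow but thorough).
    subprocess.run(["pngcrush", "-brute", src, tmp], check=True)
    # Then let pngout try to shrink the DEFLATE stream further; it may return a
    # non-zero exit code when it cannot improve the file, so don't treat that as fatal.
    subprocess.run(["pngout", tmp, dst])

smallest_png("input.png", "small.png")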
 
How did you come up with this? Was it a long-term private project, part of your job, or did you wake up one night from a dream and yell "EUREKA"? :D
It's a bit of both - a long term private project and somewhat part of my job -- that is, I'm supposed to do computer science research but not exactly on this specific topic...
 
Wow, very very interesting.

Especially in seeing HOW MUCH more it compresses compared to PNG!

I expected a few bytes, but it actually is A LOT of bytes :eek:

I love that. I especially love optimizing such things in a world where most don't even care about that anymore (as everyone thinks it's not worth it because computers are fast enough and size doesn't matter)

Great work!
Actually size matters more than ever, because median bandwidth is going down for the first time in the history of the internet: for home computers, bandwidth is still going up, but smartphones with relatively low-bandwidth internet connections are pulling the median down.

And about 60% of all internet bandwidth is used for images (JPEG, PNG, or GIF). There's a lot of money to be saved and speed to be gained with better image compression, which is probably why Google is trying to push WebP so much.

The concept I have in mind for my image format is to use one format for everything (photos, diagrams, clipart, line drawings, whatever), and store everything lossless and at the original resolution. Then let the client (the browser, or perhaps the image viewer in general) decide how many bits of that file it wants to download, depending on the actual rendering target size (something only the client knows), available bandwidth, and perhaps money-saving user configuration (in case you pay per kilobyte, as is often the case for mobile data plans).

The server side (in a web use case) could also decide to abort transfers if it is under very heavy load or something -- e.g. it could only send at most the first 20% of any image file, and it would still be a reasonable degradation.
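
Just to illustrate the client side of that idea: with any ordinary HTTP server that supports range requests, the client could decide up front how many bytes it is willing to spend on an image. This is only a sketch of the concept; the URL and the byte budget below are made up, and a real implementation would of course be smarter about picking the cut-off point.

import urllib.request

def fetch_prefix(url, byte_budget):
    # Ask the server for only the first `byte_budget` bytes of the file.
    req = urllib.request.Request(url, headers={"Range": f"bytes=0-{byte_budget - 1}"})
    with urllib.request.urlopen(req) as resp:  # the server must support range requests
        return resp.read()

# e.g. a phone on a metered plan might settle for roughly the first 20% of a 250 KB file:
# data = fetch_prefix("https://example.com/photo.flif", 50_000)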
 
Is FLIF decompression fast enough to possibly be used in a video codec? That would really show its qualities if it could replace ProRes 422 for masters.
 
How did you come up with this? Was it a long-term private project, part of your job, or did you wake up one night from a dream and yell "EUREKA"? :D
 
I'm pretty sure that he slipped and bumped his head while standing on his toilet to hang a picture.
GREAT SCOTT!  ;)

It's a bit of both - a long term private project and somewhat part of my job -- that is, I'm supposed to do computer science research but not exactly on this specific topic...
Nice, I hope it will pay off for you in one way or another. :)

Actually size matters more than ever, because median bandwidth is going down for the first time in the history of the internet: for home computers, bandwidth is still going up, but smartphones with relatively low-bandwidth internet connections are pulling the median down.


And about 60% of all internet bandwidth is used for images (JPEG, PNG, or GIF). There's a lot of money to be saved and speed to be gained with better image compression, which is probably why Google is trying to push WebP so much.
True. The internet gets more congested every day. Now that YouTube has Full HD at 60 FPS, we really need better data compression. And since every toaster is now on the web, it's time for more efficient solutions to save at least a little bandwidth.
 
This could be pretty useful for real-time applications like games, even if the decompressor hasn't been super optimised, if the rendering can look that good with less than one percent of the data loaded. Although I guess it wouldn't make sense to keep calling the decompressor when you have partial data like in the video - or does it? Can the decompressor be interrupted at any point and told 'do me a render of whatever you've got'? It sounds a little like the algorithm might permit that.
 
Amazing work. I swear, you just never know from day to day what the great minds of this forum will turn out. Three cheers for you all!
 
The thing that I like about this is that it degrades into a lossy codec so gracefully. Like you've said, systems can degrade the file intentionally if the system is overloaded or whatever, but it can also be done by the artist; they do the work, store it locally at full spec, but then maybe publish just the first 20% of it: good enough, and it saves on hosting costs.

Is FLIF decompression fast enough to possibly be used in a video codec? That would really show its qualities if it could replace ProRes 422 for masters.
Only if the algorithm lends itself well to encoding multiple frames. For a static image the compression ratio is amazing, but video codecs are designed to take advantage of multiple similar frames to improve compression up to 20 times (by my napkin math) what we're seeing here.

It would be pretty cool if it could work with video: render the video at a "reasonable" quality, discarding 80% of the data so that what's left looks good and doesn't take up a lot of space. And the best part is that if the CPU can't keep up, it can just stop decoding the frame and immediately move on to the next. Quality suffers, but at least you get the smooth video our brains generally prefer.
 
I'm intrigued by this. As a person who dabbles in digital art I'd love to see how this compares to some of the files I normally use. PNGs of stuff at 3000-plus pixels on each side tend to get a bit big.
 
Just wondering, what exactly does the "Free" in your title mean?
The same thing as in "Free Software" in the FSF/GNU sense.

Is FLIF decompression fast enough to possibly be used in a video codec? That would really show its qualities if it could replace ProRes 422 for masters.
It could be used in a video codec, just like FFV1. Decompression would probably be a bit slower than FFV1, but not much. Decompression in a lossy way (reading only part of the file) would be much faster than FFV1. The compression rate would most likely be better -- FLIF is better than FFV1 for single frames already, and since the MANIAC trees can be reused over many frames, I expect FLIF to beat FFV1 easily for video compression. But I still have to implement that ;)

I don't know ProRes 422, I'll have to look that up.

Cutting off the end of a file basically has the effect of chroma subsampling. To be a good lossy format, you would probably additionally want to add some quantization -- a good way to do that would be to save only the sign, exponent, and most significant mantissa bits. That way the quantization has minimal visual impact.
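
As a rough illustration of that sign/exponent/mantissa idea (just the concept, not an actual FLIF feature): for an integer value you would keep the sign, the position of the leading bit, and only a few of the bits below it, zeroing out the rest.

def quantize(value, mantissa_bits=2):
    # Keep the sign and the top (1 + mantissa_bits) significant bits; drop the rest.
    sign = -1 if value < 0 else 1
    mag = abs(value)
    if mag == 0:
        return 0
    exponent = mag.bit_length() - 1           # position of the leading 1 bit
    drop = max(0, exponent - mantissa_bits)   # number of low-order bits to discard
    return sign * ((mag >> drop) << drop)

assert quantize(1234) == 1024 and quantize(-77) == -64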
 
Not to start posting pngs in your thread, but

FLIF.png

1.57 KB

Edit: (license free as in flurg, as in flurg laaaif!)
 
What you've achieved looks and sounds really awesome. Kudos!

I don't know what your long-term plans for this file format are, but IF it should be adopted widely, I think staying with (L)GPL will be a problem. I see it as totally useful for browsers, but for example Chrome/Blink/whatever you call it AFAIK only has dependencies on non-GPL code, and Firefox is MPL (though GPL compatible, from what I read)... so at the very least, it would be great to have a BSD-style licensed decoder library available. But these are just my 2 cents. Also for games, I think that format could be used, so that you can display a low-resolution version of the image quite quickly and then keep updating the textures while the game is already running, steadily increasing the resolution of the texture...
 