Free Lossless Image Format (FLIF)


Notaz, Exophase, _wb_, ptitSeb, etc... the list goes on and on.

Why does this community have so many brilliant software wizards?

It really baffles me. I mean, this is top-of-the-class material here.

If only they'd start a company together...
 
Both should be fixed now.
Thanks, it builds fine now with only a few implicit typecast warnings.

I just tried it out on a 6MP photo, for curiosity's sake. Interesting seeing the progress percentage go negative on that! Final results are that the FLIF file is 15% smaller than the PNG (almost four times bigger than the lossy JPEG I usually use for publishing, of course). It's actually only 35% bigger than the source raw file, which AIUI will contain roughly the same number of pixels, but only one colour per pixel at 12 bits resolution - that's half the bits per pixel that the FLIF file is encoding.

I also tried it out on some Hase sprites I designed. The cow ends up at 1.5k rather than 1.3k (+15%), but the worm with a hat shrinks from 926 bytes to 302 bytes (-67%!)

That paragraph on the website is now fine too. It still refers to the existing codecs as being the best for specific purposes (which I'd argue they no longer are), but the phraseology is simpler now so people should more easily see what you are getting at.
 
Fixed the negative progress percentage.

Could you send me that source image with 12 bits per pixel? (or a similar image)

Usually "flif -ni" produces smaller files, but you lose the progressive interlacing. But those hase sprites are probably rather small, in that case flif will not interlace anyway.
 
OK, so this FLIF thing gave me a thought.

NASA generates these absolutely HUGE composite images.  They're amazing.  They take forever to download - and take forever to load into an editor, etc...  BUT, you can zoom in through them as if flying through space.  Insane levels of detail.

The way that FLIF resolves images...

Could:

a page be created that hosts gigantic images,

an engine on the page detect the browser window size,

and the image from the host site be supplied to the browser, showing only the resolution that the window can use, with scroll and zoom controls?

i.e. eliminate client-side scaling.
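
Since FLIF renders a coherent preview from any prefix of the file, one way to sketch that idea server-side might look like this (purely hypothetical: the file name, the bytes-per-pixel heuristic, and the query parameters are all made up for illustration):

# Hypothetical sketch of the idea above: send only as many bytes of a
# progressive FLIF as the client's window size warrants, and let the
# decoder render the preview. Nothing here is part of any real FLIF tooling.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

BYTES_PER_PIXEL_ESTIMATE = 0.2  # made-up heuristic, would need tuning

class PreviewHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        width = int(query.get("w", ["1024"])[0])   # viewport size reported
        height = int(query.get("h", ["768"])[0])   # by the client
        budget = int(width * height * BYTES_PER_PIXEL_ESTIMATE)
        with open("image.flif", "rb") as f:        # placeholder file name
            data = f.read(budget)                  # a prefix is a valid preview
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

HTTPServer(("", 8000), PreviewHandler).serve_forever()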
 
Hi all :)

I'm actually playing with that huge PNG file:

time flif Australia_Present_Vegetation_Map.png Australia_Present_Vegetation_Map.flif

Input channels: [0] 8 bpp [1] 8 bpp [2] 8 bpp
Transforms: YIQ, BND[0:42..255][1:129..471][2:170..382]
Learning a MANIAC tree. Iterating 3 times.
Header + rough data: 2616 bytes.  MANIAC tree: 24737 bytes.
Encoding done, 5896161 bytes for 1988x2228 pixels (1.3312bpp)

real    2m2.359s
user    2m1.520s
sys    0m0.252s

ls -altr

-rw-rw-r--   1 7822838 sept. 15 17:55 Australia_Present_Vegetation_Map.png

-rw-rw-r--   1 5896169 sept. 15 18:04 Australia_Present_Vegetation_Map.flif
Impressive :D

Cheers, Magic Sam
 
Dude, you are making me feel really bad about myself here. :p

Here you are, on the cusp of fundamentally changing the way images work on the internet, and I'm clicking on kittens.

But soon you will be able to view kittens faster, more efficiently and with greater fidelity. Huzzah!
I for one welcome our new high fidelity kitty overlords...
 
woah, huge flash thing

(attached screenshot: giant_flash.png)
 
woah, huge flash thing

I noticed that the new board version maximizes the video plugin to full width. It seems a bit of overkill to me too.
 
Thanks _wb_. Nice comparison. On this particular test image, lossy BPG is quite amazing. FLIF also has the right colors and works wonderfully in the downscaling (aka 'thumbnail') mode. It only lacks the resolution. So I'm wondering whether some kind of postprocessing of FLIF to smooth the edges could be appropriate for the very first iterations on a partial image when not downscaling.

One other point concerning the user agent integration of progressive decoding you outlined on your page. I'm not an expert on HTTP, but I don't think aborting the download is the right thing to do. It is probably better to make use of HTTP range requests and have the user agent request only the first 16K or so on first access, then retrieve the rest either right away (if it isn't a format that can be progressively decoded, like FLIF can) or, when appropriate for the current screen resolution, after everything else has been (partially, in the case of FLIFs) loaded. Getting the first part of every resource early could be a win anyway, because with most (all?) formats it will be possible to compute the original size from it, allowing the rendering engine to make smarter decisions.
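
A minimal sketch of that first-16K fetch, assuming only Python's standard library and a placeholder URL (any server that honors the HTTP Range header would do):

# Sketch of the range-request idea above: fetch only the first 16 KiB of an
# image on first access, then retrieve the remainder later when needed.
# The URL is a placeholder; status 206 means the server honored the Range.
import urllib.request

URL = "https://example.com/image.flif"  # placeholder

req = urllib.request.Request(URL, headers={"Range": "bytes=0-16383"})
with urllib.request.urlopen(req) as resp:
    head = resp.read()                   # enough for a progressive preview
    print(resp.status, len(head))        # expect 206, 16384

# Later, when the full-resolution image is actually wanted:
req = urllib.request.Request(URL, headers={"Range": "bytes=16384-"})
with urllib.request.urlopen(req) as resp:
    full = head + resp.read()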
 
Yes, maybe "aborting the download" is not the correct way to put it; what I really meant was to stop asking for more bytes, however that is implemented in the underlying transfer protocols. This applies not just to HTTP and web browsers; it could also be an image viewer reading bytes from an input file, which can be located on a local or remote filesystem.

I tweaked the file header a bit, which of course breaks compatibility but I don't care (yet) about that. It saves a couple of bytes, and more importantly, it is now easier for other programs to identify FLIF files.

The header is now as follows:
The first 4 bytes are, you guessed it, "FLIF".
The next 2 bytes encode the traversal method, the number of channels, and the bit depth.
For example, "31" means non-interlaced, 3 channels (RGB), 1 bytes (8 bits) per channel.
"42" means non-interlaced, 4 channels (RGBA), 2 bytes (16 bits) per channel.
"A1" means interlaced, 1 channel (greyscale), 1 byte per pixel.

Most FLIF files would start with "FLIFC1" (8 bit RGB) or "FLIFD1" (8 bit RGBA).

Then the next 4 bytes encode the image dimensions as two unsigned little-endian 16 bit ints (width, height).

Decoding the rest of the header is a bit trickier and requires an implementation of MANIAC. But at least you can get the most important image info from the first few bytes.
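
For example, the fixed part of the header could be parsed like this (a hypothetical helper based purely on the layout described above, not code from the actual FLIF sources):

# Parses the first 10 bytes as described above: 4-byte magic "FLIF",
# one byte for traversal method + channel count, one byte for bytes per
# channel, then width and height as unsigned little-endian 16-bit ints.
import struct

def read_flif_info(path):
    with open(path, "rb") as f:
        header = f.read(10)
    if header[:4] != b"FLIF":
        raise ValueError("not a FLIF file")
    mode = chr(header[4])           # '1'-'4' non-interlaced, 'A'-'D' interlaced
    if mode in "1234":
        interlaced, channels = False, int(mode)
    elif mode in "ABCD":
        interlaced, channels = True, ord(mode) - ord("A") + 1
    else:
        raise ValueError("unknown traversal/channel byte")
    bytes_per_channel = int(chr(header[5]))   # '1' = 8 bits, '2' = 16 bits
    width, height = struct.unpack("<HH", header[6:10])
    return interlaced, channels, bytes_per_channel, width, height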
 
 
So that means FLIF files are a maximum of about 65k pixels across and down? I suspect you'll be safe from digital cameras for a while, as a square image of those dimensions is about 4.3 gigapixels, but I'd guess some composite images could approach that size. I don't know what the limits are on other image file formats, and those might stop commonly available images from exceeding that size, but I generally don't think it's a good idea to encode limits into something that has historically always risen.

But of course, you can always put out another version of the format with extensible or simply increased limits when people need them, provided you know in advance that they do.
 
Yes, 65535x65535 is the maximum image resolution. If this ever becomes a problem, it is easily solved by treating a width and height of 0x0000 as a special size marker (a 0x0 image does not make sense anyway), followed by the actual size as two 32-bit ints, or something like that.
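
In code, that escape hatch might look something like this (purely hypothetical, in the spirit of the header layout described earlier):

# Hypothetical escape hatch: width == height == 0 signals that the real
# size follows as two little-endian unsigned 32-bit ints.
# Not part of any actual FLIF version.
import struct

def read_dimensions(f):
    width, height = struct.unpack("<HH", f.read(4))
    if width == 0 and height == 0:        # special marker for huge images
        width, height = struct.unpack("<II", f.read(8))
    return width, height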

At the moment, I don't think it's a real limitation. For example, FLIF's biggest competition, Google's WebP, "uses 14 bits for width and height. The maximum pixel dimensions of a WebP image is 16383 x 16383." So that would be about 0.25 gigapixels.

Note that a 4 gigapixel image stored as uncompressed RGBA requires 16 gigabytes of RAM. The current implementation of FLIF uses 16 bits per channel if the input is 8 bit RGB, because it works in a 9-bit YIQ color space and needs 10-bit integers to deal with pixel differences. FLIF stores the entire image in memory during encoding and decoding, so that would take 32 gigabytes (plus some extra for the MANIAC trees etc, but that would be negligible).
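
Spelled out (back-of-the-envelope, matching the figures above):

# Back-of-the-envelope memory arithmetic from the paragraph above.
pixels = 4 * 10**9                 # a 4-gigapixel image
rgba_8bit = pixels * 4 * 1         # 4 channels, 1 byte each  -> 16 GB
rgba_16bit = pixels * 4 * 2        # 4 channels, 2 bytes each -> 32 GB
print(rgba_8bit / 10**9, rgba_16bit / 10**9)   # 16.0 32.0 (gigabytes)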

For images that large, something smarter would have to be done anyway if you want to display them efficiently. FLIF could work well to show you a "fit-to-screen" image, even if the total image is that huge. But if you want to zoom in on specific regions, it would need to decode the details of the entire image, even if you're only interested in part of it. So it would probably be better to cut the image up into tiles and encode them as separate files, so that if you zoom in on some part, only the relevant tile(s) have to be decoded.
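
A rough sketch of that tiling step, assuming the flif command-line encoder used earlier in this thread plus the Pillow library, with an arbitrary tile size and placeholder file names:

# Split a huge image into tiles and encode each tile as its own FLIF file,
# so a viewer only ever decodes the tiles it needs.
# Assumes Pillow is installed and the `flif` encoder is on PATH.
import subprocess
from PIL import Image

TILE = 2048                           # arbitrary tile edge length in pixels

img = Image.open("huge.png")          # placeholder input
w, h = img.size
for y in range(0, h, TILE):
    for x in range(0, w, TILE):
        tile = img.crop((x, y, min(x + TILE, w), min(y + TILE, h)))
        name = f"tile_{x}_{y}"
        tile.save(name + ".png")
        subprocess.run(["flif", name + ".png", name + ".flif"], check=True)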
 
 FLIF stores the entire image in memory during encoding and decoding, so that would take 32 gigabytes (plus some extra for the MANIAC trees etc, but that would be negligible).

I wondered about that when I was fiddling with ~5GB camera files. I only have 2GB on this machine, and don't have any swap enabled according to /proc/swaps. But it worked anyway, and the flif files are recoverable to practically identical png files.
 
I think you mean ~5MB camera files. I have never seen a camera that produces 5GB files :)
 