Free Lossless Image Format (FLIF)


Is it possible to split, or rather combine, animated flifs? That way there is at least a possibility for tuning that's rather flexible.

Specifying corner-case options in the format seems like added complexity in a rigid way, for dubious rewards.
 
Is it possible to split, or rather combine, animated flifs? That way there is at least a possibility for tuning that's rather flexible.

It's possible to split or combine flifs by decoding them and reencoding them.

I'm not sure what you mean by tuning.  If you mean dynamic quality reduction like we've been discussing on this page, then an artificially reduced animated flif should still run for the full duration with the right number of frames, but each frame will have a reduced quality.  I've not tested that yet though.
 
I mean tuning by hand, done at encoding time, which supposedly saves some bits by predetermining what the user might want.

I think anticipating how something may come up is better done by adding flexibility than by adding complexity.
 
The client knows the total file size I suppose. Just downloading a fixed percentage of that could be good enough.

It would make sense not to have a change in quality partway through the image though. If I understand how the interlacing works, there should be a series of logical 'stopping points', one after each level of refinement, so it would make sense to request the exact file size required to reach one of those points, if that can be calculated.

-Neelix

Edit: Come to think of it though, perhaps a simpler solution would be to download a given amount by percentage, then only continue to refine the image if the entire next level of refinement is available.
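
A rough sketch of that idea in Python (the level end-offsets are hypothetical; FLIF doesn't currently expose such a table, so they would have to come from metadata or be computed):

```python
# Rough sketch: download a fixed percentage of the file, but snap the cutoff
# down to the end of the last *complete* refinement level. The list of level
# end-offsets is hypothetical; it would have to come from metadata or be
# computed, FLIF does not currently expose such a table.

def bytes_to_fetch(total_size: int, budget_fraction: float,
                   level_end_offsets: list[int]) -> int:
    """Return how many bytes to request so the download stops at a clean level."""
    budget = int(total_size * budget_fraction)
    chosen = 0
    for end in level_end_offsets:       # offsets are in increasing order
        if end <= budget:
            chosen = end                # this level fits entirely in the budget
        else:
            break
    # Always fetch at least the first level so something can be displayed.
    return max(chosen, level_end_offsets[0]) if level_end_offsets else budget

# Example: a 100 KB image with (made-up) levels ending at these byte offsets.
print(bytes_to_fetch(100_000, 0.5, [2_000, 9_000, 30_000, 55_000, 100_000]))
# -> 30000: the 55 KB level doesn't fit into the 50 KB budget, so stop at 30 KB.
```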

 
 
Can we define "author" as a person or a group of people with control over the collection of images in the multi-image scenario? Instead of image files with integrated negotiation tables, which may not make so much sense, let's have the tables be, say, a separate manifest, or integrated into the image HTML elements.
 

One major missing piece is that you're assuming the author also created all the images.  This is very rarely the case.
You're right, and there's actually a chance for deduplication here that I hadn't considered... of course the author has not necessarily created the images, so optimization would (probably?) not typically happen on a file-by-file basis, but rather be a matter of grouping images into different classes, with one negotiation table per class.

I'm not talking about someone who created all the images, but about someone who created the multi-image-file scenario. That's what I've had in mind.
Why does it know it needs to load the icons fully? Why do the logo and the avatars get lower priority? How is this decided and why can't such a decision be automated? What is gained by having the author intervene to decide what quality levels are appropriate?

This is already being done today in web development: image files and graphics are optimized and delivered in ways that enhance usability and reduce loading times. The paradigm is just completely different right now, revolving around other file formats, various transmission methods, and so on.

To answer your questions from my perspective (please do say something if you think that I'm mistaken about something here):

  • "Why does it know it need to load the icons fully?" The icons in the editor of this forum are important to the function of this website. If they were a bit blurry, then you would have more trouble figuring out the editor, especially if you had never previously seen those icons loaded fully. The alternative would be to manually load each of these when you need them, or to flat-out disable the image quality reduction for the site.
  • "Why does the logo and the avatars get lower priority?" To "reserve" more resources for images embedded into the conversation if the user wants some form of data saving to take place (compared to author-information-agnostic data saving). Most people probably come here primarily for the contents of the conversations, not for the avatars.
  • "How is this decided and why can't such a decision be automated?" Good question. I'm not sure whether I can truly answer this yet. (Are we possibly looking at this from different angles here?) Some things to consider: Such decisions are already being made in web development today, plus: remember that the process of picking a suitable amount of compression if an image is to be saved as a lossy jpeg file isn't automated either (at least not to my knowledge).
  • "What is gained by having the author intervene to decide what quality levels are appropriate?" In data-saving-enabled conditions, potentially way less data has to be transmitted, and waiting times from manual invocation of FLIF file full-loading are reduced or minimized.
     
The client knows the total file size I suppose. Just downloading a fixed percentage of that could be good enough.

...am I the only one here who thinks that this could be a significant waste/inefficiency with FLIF?

I mean, am I the only one to think that we're talking about a potential killer feature here?
 
I didn't ask "why does it need to know", I asked "why does it know". I already knew it was important to make sure some things are full quality, but you stated that the client would somehow know this, to which I ask "why?" What is the mechanism you're envisioning that decides what is and is not important? Does the website creator need to inspect the file and set these quality tiers to 100% so the browser is forced to download the whole thing? Can this not be automated?
To that I have a very confident "yes", yes it can be automated.  The icons are so small in size and resolution, they can be encoded in a few hundred bytes.  Even the avatars are in the 10-20KB range, not a lot to be gained by reducing the quality any.  A set of metrics can be determined on a client by client basis for figuring out whether it's worthwhile getting the whole file or not.
You see a feature in having the host define recommended quality levels, and failing to do so as a waste.
I see a feature in NOT having the host define recommended quality levels, in allowing the client to make the judgement itself entirely automatically, without having to add to the file format at all, without the possibility of abuse from content creators and ISPs.  To me, giving the author this ability is a risk and needless complexity.
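
To illustrate the kind of fully automated, client-side decision being described here, a minimal sketch; the thresholds and the function name are invented and would be tuned per client:

```python
# Minimal sketch of a per-client heuristic; all thresholds are made up.

def fetch_whole_file(file_size_bytes: int, display_width: int,
                     display_height: int, metered_connection: bool) -> bool:
    """Decide whether a partial (progressive) download is even worth the trouble."""
    if file_size_bytes < 20_000:                   # tiny files: savings are negligible
        return True
    if display_width * display_height <= 64 * 64:  # icons and small thumbnails
        return True
    # Large images on a metered connection: stop at a reduced-quality prefix.
    return not metered_connection
```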
 
FLIF metadata (in the beginning of the stream) could contain a list of prefix sizes for different resolution/quality settings.

Maybe the img tag needs an optional "priority" attribute to help browsers choose the order in which images are downloaded/refined.
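
Purely as an illustration, a prefix-size table like that might be parsed along these lines; the "PFXS" chunk name and its layout are invented here, not part of FLIF:

```python
# Hypothetical sketch only: the 'PFXS' chunk and its layout are invented.
# The idea is a small table near the start of the stream that maps quality
# steps to byte prefix lengths, so a client knows how much to request.
import struct

def parse_prefix_table(header: bytes) -> list[tuple[int, int]]:
    """Parse a made-up chunk: a count, then (quality_permille, prefix_bytes) pairs."""
    pos = header.index(b"PFXS") + 4                # locate the hypothetical chunk
    (count,) = struct.unpack_from(">H", header, pos)
    pos += 2
    table = []
    for _ in range(count):
        quality, prefix_len = struct.unpack_from(">HI", header, pos)
        pos += 6
        table.append((quality, prefix_len))        # e.g. (500, 31200): 50.0% at 31,200 bytes
    return table
```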
 
FLIF metadata (in the beginning of the stream) could contain a list of prefix sizes for different resolution/quality settings.

Maybe the img tag needs an optional "priority" attribute to help browsers choose the order in which images are downloaded/refined.

I think the picture tag is intended to fix that. I would tend to think that an optional list of prefix sizes is better than letting the author try to guess how important an image is; the presence of a prefix list could also be taken to imply that an image is less important. I generally think that marking an image as less important can be beneficial, while marking an image as more important is more prone to abuse.

As long as the final FLIF file format supports adding new types of metadata in the future (e.g. like PNG does by defining new chunks), there is really no need to try to define everything up front, as long as the most common/useful concerns are taken care of.

 
 
Putting something in the HTML is something I can get behind: external and optional. I would expect a single author to write a single web page, if not the entire site, and if a group is working on the site there'll be some kind of communication; you don't generally see websites with pages pulled wholesale from the internet, and there's no "open-clipart" equivalent.
I'm still against storing anything unnecessary in the metadata though.
 
This is something from a private conversation that I had with _wb_ a while ago. Since several people here in this thread have been talking about similar concepts, I'm copying it here with _wb_'s permission.
...

Be careful to put the metadata in the proper place. I have a feeling that two separate parts of metadata are being mixed up here.

If I understand correctly (please correct me if I'm wrong), FLIF decodes an image in several steps. Each step improves the quality of the image. This creates a quality/processing trade-off. In addition, the improvement layers are apparently stored in order, such that later improvement layers do not need to be downloaded. The information about how many quality levels exist in the image, as well as at which points in the FLIF stream each of these layers starts, is obviously metadata concerning the image and should be stored in the image itself <edit>(unless it can be computed)</edit>.

I share your idea that this has nice applications on the web. I.e. a browser can set a quality threshold on images to limit bandwidth and CPU usage for decoding. Whenever it encounters a FLIF it could download the FLIF header containing the metadata and subsequently download all the improvement layers within the allowed range. What I read in your discussion with _wb_ is that there can be an application-defined distinction between which images are more important and which are less important. This, by itself, has nothing to do with the image and is in fact image-independent. As such, this metadata should reside in the web application (i.e. the HTML content) and not in the image itself.

On the one hand, you cannot predict and cater to every type of usage of the image in the definition of the image format itself (and yes, the web is a typical and easily predictable one). On the other hand, tagging image importance is also worthwhile for images which do not support quality levels at all. E.g. (assuming a binary quality scale) you could envision that avatar images are all flagged as optional on the forums and your browser provides a setting "download only non-optional images". This works with any image format (and can easily scale to relative importance levels, e.g. 255 levels).

To summarize, it seems to me that the quality hints you describe belong in HTML and/or CSS, not in FLIF itself.
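
To make the proposed split concrete, here is a small sketch of how a client could combine page-level importance hints (from HTML/CSS) with per-image prefix sizes (from the image header); the field names and numbers are invented for illustration:

```python
# Sketch only: importance comes from the page, prefix sizes come from the image.

def plan_downloads(images, page_budget_bytes):
    """images: dicts with 'name', 'size', 'optional' (from HTML) and
    'half_quality_prefix' (from the image header). Returns (name, bytes) pairs."""
    plan, spent = [], 0
    # Non-optional images are fetched in full first.
    for img in images:
        if not img["optional"]:
            plan.append((img["name"], img["size"]))
            spent += img["size"]
    # Optional images get whatever budget remains, truncated at a quality prefix.
    for img in images:
        if img["optional"]:
            want = min(img["half_quality_prefix"], max(page_budget_bytes - spent, 0))
            plan.append((img["name"], want))
            spent += want
    return plan
```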
 
Makes me think, do you think that FLIF progressive decoding could be useful for textures?
You have only one resource for each texture, on low hardware/low resolution you can just handle the first bytes and get a low resolution texture, while on higher hardware/resolution you could use the full image...

I don't really know the specifics of the algorithm but it's probably not a good fit for texture compression for the same reason PNG and JPEG aren't. For texture compression to work well it has to be possible to cheaply randomly access fairly small blocks and those blocks have to be contiguous in memory. The main goal isn't really to reduce overall memory footprint but to reduce the number of fetches on the bus.
 
Makes me think, do you think that FLIF progressive decoding could be useful for textures?
You have only one resource for each texture, on low hardware/low resolution you can just handle the first bytes and get a low resolution texture, while on higher hardware/resolution you could use the full image...

I don't really know the specifics of the algorithm but it's probably not a good fit for texture compression for the same reason PNG and JPEG aren't. For texture compression to work well it has to be possible to cheaply randomly access fairly small blocks and those blocks have to be contiguous in memory. The main goal isn't really to reduce overall memory footprint but to reduce the number of fetches on the bus.
Yes, FLIF would not be suitable at all as an in-memory texture format. It might be suitable as an on-disk texture format though.
 
It compresses better than PNG, lossless JPEG 2000, and Google's WebP format in lossless mode.

Yeah, but does FLIF support DRM?

http://www.bbc.co.uk/news/technology-34538705

JPeg pictures could soon have built-in restrictions making them harder to copy, if recommendations by the body overseeing the format are implemented.
 
I do love the way several FOSS programs have a little tickbox in their config dialogue labelled "obey DRM limitations", left unticked by default.
 
Makes me think, do you think that FLIF progressive decoding could be useful for textures?
You have only one resource for each texture, on low hardware/low resolution you can just handle the first bytes and get a low resolution texture, while on higher hardware/resolution you could use the full image...

I don't really know the specifics of the algorithm but it's probably not a good fit for texture compression for the same reason PNG and JPEG aren't. For texture compression to work well it has to be possible to cheaply randomly access fairly small blocks and those blocks have to be contiguous in memory. The main goal isn't really to reduce overall memory footprint but to reduce the number of fetches on the bus.
Yes, FLIF would not be suitable at all as an in-memory texture format. It might be suitable as an on-disk texture format though.
Usually 3D games use compressed textures like DXT/BC/PVRTC/ETC2. These are all lossy but fixed-size compression methods. More information here: http://www.reedbeta.com/blog/2012/02/12/understanding-bcn-texture-compression-formats/

So basically the BC1 format has 4x4 blocks, each with two endpoints stored in RGB565. That's a total of 32 bits of color plus a 2-bit index per texel. It would be really interesting to see how well FLIF could compress these kinds of textures. This could be a great on-disk format for compressed textures. Generating mipmaps on the fly would also be a neat feature, but most of the time you want full control over filtering at mipmap generation.
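
For reference, a small sketch of decoding one BC1/DXT1 block as described above: two little-endian RGB565 endpoints plus sixteen 2-bit indices per 4x4 block.

```python
import struct

def rgb565_to_rgb888(c: int) -> tuple:
    # Expand the 5/6/5-bit channels to 8 bits each.
    r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def decode_bc1_block(block: bytes):
    """Decode one 8-byte BC1 block into a 4x4 grid of (r, g, b) texels."""
    c0, c1, indices = struct.unpack("<HHI", block)   # 2x RGB565 + 32 index bits
    p0, p1 = rgb565_to_rgb888(c0), rgb565_to_rgb888(c1)
    if c0 > c1:   # four-color mode: two interpolated colors between the endpoints
        palette = [p0, p1,
                   tuple((2 * a + b) // 3 for a, b in zip(p0, p1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))]
    else:         # three-color mode: midpoint plus a black/transparent entry
        palette = [p0, p1,
                   tuple((a + b) // 2 for a, b in zip(p0, p1)),
                   (0, 0, 0)]
    # Texel (x, y) uses bits 2*(4*y + x) of the 32-bit index word.
    return [[palette[(indices >> (2 * (4 * y + x))) & 0b11] for x in range(4)]
            for y in range(4)]
```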
 