Free Lossless Image Format (FLIF)


Quote said:
The challenge will be to add this "partial download" functionality to browsers. It's a feature they don't have yet (even though theoretically, it would make sense also for Progressive JPEGs and Adam7 interlaced PNGs -- I didn't invent progressive decoding, I only improved it slightly), and it would be of crucial importance to make the "Responsive By Design" idea of FLIF work at all.
Actually I have already created a feature request for Chrom(ium). :) https://code.google.com/p/chromium/issues/detail?id=539120 So this challenge is already accepted. ;) Best regards, Ruben
 
This is something from a private conversation that I had with _wb_ a while ago. Since several people here in this thread have been talking about similar concepts, I'm copying it here with _wb_'s permission.

-----
Have you thought about building in something like an author-client quality level translation table? With such a table located early in a FLIF file, the client software could know which "quality" (== what percentage of the file to load) the file's author recommends for the user's/client's current situation. This method could be used, for example, to give web browser users a browser-global "Bandwidth zoom" slider, with each image file being loaded according to what that file's author would consider an appropriate crispness for the given slider value.
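For illustration, here is a purely hypothetical sketch of what such a table could boil down to for a single file. The tier names, percentages, and structure are all invented for the example and are not part of FLIF:

```python
# Hypothetical author-supplied translation table for one FLIF file.
# Keys are coarse "bandwidth zoom" tiers the client might find itself in;
# values are the percentage of the file the author recommends loading.
# Neither the tier names nor this structure exist in the actual FLIF format.
QUALITY_TABLE = {
    "very_low": 5,    # e.g. EDGE or a heavily metered connection
    "low":      20,   # e.g. 3G
    "medium":   60,   # e.g. decent mobile broadband
    "high":     100,  # e.g. LTE / unmetered broadband
}

def recommended_bytes(file_size_bytes: int, tier: str) -> int:
    """Translate the author's recommendation for a tier into a byte count."""
    percentage = QUALITY_TABLE.get(tier, 100)
    return file_size_bytes * percentage // 100

# e.g. recommended_bytes(7_100_000, "low") -> 1_420_000 bytes
```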

If, for the sake of example, a FLIF file's quality range were 0 - 255 (where a higher value means better image quality), then for some images it may only really make sense to load, say, quality 200, even if the user otherwise was more in a 64 situation. If users have to repeatedly initiate downloads to force high image quality for anything that they want to see in full detail at < 255 bandwidth zoom, then they will grow tired of it quickly no matter how easy the process is, and consider the experience a "lesser" version of the internet surfing experience, which is a very big deal even on limited mobile connections. There are many related problems: users can feel uneasy because they can never really know whether they would want to see the 255 quality of an image unless they have seen it at least once, there's disappointment when the loading wasn't necessary after all, there may be many small image files which repeat across a website's pages and would be too bothersome to load manually at high(er) quality each time, and so on.

[...]

The important aspect is that in scenarios where multiple image files by the same author/authority are handled "simultaneously", due to the rather fluid nature of FLIF it would be nice if the client would be able to make on-the-fly negotiations with the image author/authority, who knows which files in the multiple-image-file scenario are important or not.

At first I was thinking about something like a "target quality hint" too, but I think that that would be too inflexible, at least if it's supposed to suit FLIF. Consider a scenario where the creator(s) of a website create a page where pinch-to-zoom is an essential feature. Then they could use the target quality hint to suggest that, say, the big image in the middle should be loaded fully, because they already know that it is most likely going to be zoomed. However, how is the client going to interpret this hint? How does it know the importance of the hint in the context of, say, reduced loading due to the screen being small? What about other client environment factors such as available bandwidth, data caps, etc.? The simple "target quality hint" would not allow the client to use FLIF's uniqueness as efficiently as possible, because it wouldn't be able to tell how the hint is to be interpreted in relation to the circumstances of the client's environment. All that the client would know would be that the "optimal" quality of an image file is whatever the hint says. So, it would be nice if the author could tell the client something to help the client decide how much of each image file to load in various situations, thereby using FLIF to the fullest.

To add detailed author information for every possible client situation would probably be a mess, but luckily there is a common denominator: it (almost?) always comes down to bandwidth in one form or another. So, while the author cannot give the client detailed information about every image for every possible situation, if the client could give the author information about its current situation/environment (which basically boils down to the cost of transmissions), then the author could look at that situation and respond with a recommended loading percentage for each image file, tailored to the client's current situation.

Consider another scenario where a mobile device loads images fully on a good LTE connection, but when only 3G is available, it could say: "Okay, the author(s) say I could first save on that image there, leave these here alone, and save on the others as I see fit." If only EDGE is available, it could say: "Maybe I should go into the territory where the image author(s) say I can safely shave off almost everything on these images here, just a bit on this one here, and the user experience should remain acceptable for the circumstances."
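A rough sketch of that LTE/3G/EDGE scenario, with invented per-image tables and an invented connection-to-tier mapping (nothing here is an existing FLIF or browser mechanism):

```python
# Hypothetical per-image recommendations, as an author might ship them:
# each image gets its own suggested loading percentage per bandwidth tier.
PAGE_IMAGES = {
    "logo.flif":  {"edge": 10,  "3g": 30,  "lte": 100},
    "photo.flif": {"edge": 2,   "3g": 15,  "lte": 100},
    "icons.flif": {"edge": 100, "3g": 100, "lte": 100},  # tiny, always load fully
}

# Invented mapping from the client's detected connection type to a tier.
CONNECTION_TO_TIER = {
    "gprs": "edge", "edge": "edge",
    "umts": "3g",   "hspa": "3g",
    "lte":  "lte",  "wifi": "lte",
}

def loading_plan(connection_type: str) -> dict:
    """Return the author-recommended loading percentage per image for the
    client's current connection; the client is free to deviate from it."""
    tier = CONNECTION_TO_TIER.get(connection_type, "lte")
    return {name: table[tier] for name, table in PAGE_IMAGES.items()}

# loading_plan("umts") -> {'logo.flif': 30, 'photo.flif': 15, 'icons.flif': 100}
```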
 
That's as may be, but I'll add that in my experiments with a 6MP source file, 100kB still produced a detailed reproduction (out of a 7.1MB starting FLIF file). 100kB would be a bit much on an old pre-ADSL 56kbps modem line, but should be sub 0.5s on even an old 2.5G GPRS connection, and to my eye would be fine as a scaled picture in a webpage on my PC.

Presumably there's some size at which it hasn't even been able to do a first pass, which would result in a partial image, but even at those unusably high resolutions I've not found it yet.

So it seems to me an initial limit could be set as a number of kilobytes, and you'd have tighter control over your bandwidth that way, rather than leaving it to the author to specify how many MB of your bandwidth he can use.
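A client-side kilobyte cap like this could already be approximated with an ordinary HTTP Range request, since a truncated prefix of a progressively coded file still decodes to a usable preview. A minimal sketch (the URL is a placeholder):

```python
import urllib.request

def fetch_prefix(url: str, max_bytes: int = 100 * 1024) -> bytes:
    """Fetch only the first max_bytes of a progressively coded image.
    Servers that ignore the Range header simply return the whole file,
    so the amount actually read is capped as well."""
    request = urllib.request.Request(
        url, headers={"Range": f"bytes=0-{max_bytes - 1}"})
    with urllib.request.urlopen(request) as response:
        return response.read(max_bytes)

# data = fetch_prefix("https://example.com/image.flif")  # placeholder URL
```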
 
So it seems to me an initial limit could be set as a number of kilobytes, and you'd have tighter control over your bandwidth that way, rather than leaving it to the author to specify how many MB of your bandwidth he can use.
...I think you misunderstood. The author would recommend sets of quality levels for (almost) any/all situations/connections/etc. that the client/user could possibly encounter, by breaking these situations down into a number of "bandwidth" tiers/levels, and suggesting a certain amount of loading of each involved FLIF file for any of these bandwidth scenarios. The user (manually) and/or client software (automatically) can then pick any of these tiers/levels, and load image files appropriately, based on what the author of this multi-image-file scenario recommends /for the user's/client's _CURRENT_ bandwidth situation/.

In other words, the client/user decides about the quality, not the author. The author merely responds to the client's/user's setting with a recommendation.

Look at it this way: If "the author" knows which image files of a multi-image-file scenario are most important in which situations (based on what can be seen in these images), and the user/client does _NOT_ know it, then efficiency actually /decreases/ if you throw this information away. So it's actually not about whether or not this information should be used, but rather about using it in such a way that it's not /exclusively/ the image author(s) who have a say. Thus, negotiation.
 

100kB would be a bit much on an old pre-ADSL 56kbps modem line, but should be sub 0.5s on even an old 2.5G GPRS connection,

The latency of GPRS can easily exceed 500ms, or even 1000ms, and that's assuming a "stable" connection. In effect, this comparison doesn't make a lot of sense. See also: Optimizing HTTP: Keep-alive and Pipelining.

Not to mention, TCP starts slow.
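A back-of-the-envelope sketch of why latency and TCP slow start dominate small transfers on GPRS; the round-trip time, usable bandwidth, and initial window below are illustrative assumptions rather than measurements, and the model is deliberately crude:

```python
def estimate_transfer_time(payload_bytes: int,
                           rtt_s: float = 0.7,            # assumed GPRS round-trip time
                           bandwidth_bps: float = 40_000, # assumed ~40 kbit/s usable rate
                           init_cwnd_segments: int = 10,
                           mss_bytes: int = 1460) -> float:
    """Crude estimate: connection setup + slow-start round trips + serialization.
    Real transfers overlap these phases; this only shows the order of magnitude."""
    time_s = rtt_s                 # TCP handshake
    sent = 0
    cwnd = init_cwnd_segments
    while sent < payload_bytes:
        sent += cwnd * mss_bytes
        cwnd *= 2                  # congestion window doubles each round trip
        time_s += rtt_s
    time_s += payload_bytes * 8 / bandwidth_bps  # raw serialization time
    return time_s

# estimate_transfer_time(100 * 1024) -> roughly 24 s with these numbers,
# a long way from "sub 0.5 s".
```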
 
That's as may be, but I'll add that in my experiments with a 6MP source file, 100kB still produced a detailed reproduction (out of a 7.1MB starting FLIF file). 100kB would be a bit much on an old pre-ADSL 56kbps modem line, but should be sub 0.5s on even an old 2.5G GPRS connection, and to my eye would be fine as a scaled picture in a webpage on my PC.

Presumably there's some size at which it hasn't even been able to do a first pass, which would result in a partial image, but even at those unusably high resolutions I've not found it yet.

So it seems to me an initial limit could be set as a number of kilobytes, and you'd have tighter control over your bandwidth that way, rather than leaving it to the author to specify how many MB of your bandwidth he can use.

The first pass is just 1 pixel. The second pass is 2 pixels. The third pass is 4 pixels. The fourth pass is 8 pixels. And so on. The first 12 passes give you a 64x64 image (less if the aspect ratio isn't square). This usually only takes a couple of KB. 
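A simplified model of that progression (it assumes each pass doubles the decoded pixel count; FLIF's actual generalized interlacing scheme may differ in detail):

```python
def pixels_after_pass(passes: int) -> list:
    """Decoded pixel counts after each pass, assuming the count doubles per
    pass: pass 1 -> 1 pixel, pass 2 -> 2, pass 3 -> 4, and so on."""
    return [2 ** p for p in range(passes)]

# pixels_after_pass(13)
# -> [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096]
# A 64x64 preview (4096 pixels) is reached within the first dozen or so passes
# of a roughly square image, which typically costs only a few kilobytes.
```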
 
FWIW I'm with Levi. If there were an objective metric, it might be ok, but letting web authors specify their preference is just a recipe for disaster.
 
FWIW I'm with Levi. If there were an objective metric, it might be ok, but letting web authors specify their preference is just a recipe for disaster.

I'm not seeing the disaster here, could you explain?
 
No great disaster as I see it. Just that the author of the image has no knowledge of my bandwidth requirements. Indeed they're often not in the best position to decide how much of the image to download for initial purposes; I'd guess their requirements for a first pass would be enough that you can see some of the detail they've purposely put into the image, while my requirements would just be enough to see generally what it is, to decide if I want to see more of it.

On the other hand, I've found during experimentation that I can find a file size that results in a usable picture even with high-resolution inputs. Granted, that would download rather more than is necessary for a smaller-resolution file by my quality metrics, but it would be easier for me to control if I knew that every ten pictures on a web page meant 1MB of my usage without expanding them.

It's possible that all authors would have my best interests at heart, and only specify 100kB for their highest-resolution, most detailed images at minimum quality, with smaller/simpler images having smaller minimum filesizes, and while that will save me some money, it makes it harder for me to keep track of it. And if latency is an issue as you suggest, it seems to me that as you get much below 0.5s, the proportion of the loading time that's due to connection latency will only increase.

If speed is the main concern on a mobile connection, then presumably you might want a larger minimum filesize on a faster connection such as HSPA.

It seems to me, to summarise, that I'm in a much better position than the author of the image to decide how much bandwidth and loading time I want to spend on their image.  It may still be worth spending some time on allowing authors to encode a recommendation in the metadata, I just don't think it's worth many bytes, and much time finessing its implementation, if it's hardly ever going to be used.
 
FWIW I'm with Levi. If there were an objective metric, it might be ok, but letting web authors specify their preference is just a recipe for disaster.

I'm not seeing the disaster here, could you explain?
The client gets the file with author suggestions on the amount of data that "looks good", but what if the author is an ass, or doesn't know shit?  He could just say all tiers need the full file to look good enough, because he's pretentious and demands that everyone see his image in full HD glory.  The client might be a tiny smartwatch and therefore request the lowest quality file, but ultimately download the full 12MB file anyway because the author said so.
If such tiers were to be defined, there'd need to be some kind of hard metric, a standard that everyone accepts, and instead of the author deciding what levels are acceptable, the software would figure out what percentage of the file is required to meet the minimum requirements of each tier.
Or we could just ballpark it based on image vs screen resolution and file size and leave it entirely up to the client.
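A minimal sketch of that kind of ballparking, assuming the client knows the image's full dimensions (e.g. from the header) and the size it will be displayed at; the pixel ratio is only a crude proxy for the byte fraction worth fetching:

```python
def ballpark_fraction(image_w: int, image_h: int,
                      display_w: int, display_h: int) -> float:
    """Rough fraction of a progressive file worth fetching: enough to cover
    the displayed resolution, never more than the whole file.
    (Bytes are not exactly proportional to pixels, so this is only a ballpark.)"""
    needed_pixels = display_w * display_h
    total_pixels = image_w * image_h
    return min(1.0, needed_pixels / total_pixels)

# A 24 MP photo shown as a 320x240 thumbnail:
# ballpark_fraction(6000, 4000, 320, 240) -> 0.0032, i.e. well under 1% of the data.
```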
 
Levi and WizardStan put it much better than me, so I will not post my rant about ignorant content providers. I'll just add that to me it is much better to focus on interesting images and configure an exception to my general download length setting (it could be a simple mouse gesture or - with headtracking - just looking at the image for some time) than having to deal with uninteresting images that use undesirable amounts of bandwidth (the latter not even being obvious unless I open the debug tools).
 
So what, can't the client software decide loading based on the cumulative file size of all the FLIF files on the current website? Instead of directly going down to a specific lower tier, it can just pick a tier such that the overall FLIF data that will be loaded is at the desired percentage of the theoretical overall full FLIF data size of the current website (and/or below a certain fixed amount).
 

[...] what if the author is an ass, or doesn't know shit?

...oh, /that!/ That's always a problem though, just look at some of the javascript thingamajigs, div soups, too too too too overqualified CSS, and so on of today, or remember the hell pages of the olden times.

If we assume a pretentious author, then that author could just provide images without interlacing in any case. On the other hand, more and more website authors understand that, for example, loading times tend to correlate with sales, and that even in mobile situations, "lesser" versions of the internet are generally not well received.


PS: The screen size adjustment can happen exclusively on the client side with FLIF in any case, and it is (or can be) "lossless". If I understood this correctly, then other, possibly loss-inducing heuristics can just be factored in at that point. Can you confirm this, _wb_?
 
So what, can't the client software decide loading based on the cumulative file size of all the FLIF files on the current website? Instead of directly going down to a specific lower tier, it can just pick a tier such that the overall FLIF data that will be loaded is at the desired percentage of the theoretical overall full FLIF data size of the current website (and/or below a certain fixed amount).

If the client is selecting a tier based on desired percentage, why not have the client just load that much data?  The client's going to do the math to figure out if a tier value makes sense and then choose to accept it or just use its own heuristic, which, assuming everything is kosher, should be about the same as what the author has suggested for the appropriate tier anyway. So what is actually gained by having the author arbitrarily choose a value?  I posit that there's no gain; the client is ultimately going to do its own thing anyway.
 
If the client is selecting a tier based on desired percentage, why not have the client just load that much data?  The client's going to do the math to figure out if a tier value makes sense and then choose to accept it or just use its own heuristic, which, assuming everything is kosher, should be about the same as what the author has suggested for the appropriate tier anyway. So what is actually gained by having the author arbitrarily choose a value?  I posit that there's no gain; the client is ultimately going to do its own thing anyway.

Hm, maybe I haven't made this clear, but if there's a quality negotiation table for every FLIF file, then of course that means that they can all be different in a multi-image scenario.

Basically, to stay with the example of a browser loading a webpage: instead of using one amount of loading reduction across the board, in order to meet the reduction goal the browser can use the differences between all the FLIF file tables to selectively reduce image qualities, depending on which images the author of this multi-image-file scenario (the webpage) suggests as being important or unimportant. For example, if the browser wants to load 10% of the FLIF data, then it could know, say, that all these little icons at the top of this editor textbox on these forums here should be loaded fully, and the rest spent equally on the other image files. However, at 50%, the browser could know that, say, the editor icons should be loaded fully, the website logo at the top and the avatars at 20%, and the rest can be spent at will, such as on any images that may be included in posts.

...or am I missing something here that I'm not seeing right now?
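To make that selection logic concrete, here is a rough sketch of how a client might spend an overall byte budget; it is a simplified greedy variant using invented per-file priorities rather than full per-tier tables, and the file names, sizes, and priority values are all made up for the example:

```python
# Hypothetical per-file entries: (file name, full size in bytes,
# author-suggested priority; higher priority = load earlier/fuller).
IMAGES = [
    ("editor-icon-1.flif",     2_000, 10),
    ("editor-icon-2.flif",     2_000, 10),
    ("site-logo.flif",        80_000,  5),
    ("avatar.flif",           40_000,  3),
    ("post-photo.flif",    7_100_000,  1),
]

def allocate(budget_bytes: int) -> dict:
    """Greedy allocation: load files fully in priority order until the
    budget runs out; the last file considered gets whatever remains."""
    plan = {}
    remaining = budget_bytes
    for name, size, _priority in sorted(IMAGES, key=lambda img: -img[2]):
        take = min(size, remaining)
        plan[name] = take
        remaining -= take
    return plan

# allocate(200_000) loads the icons, logo and avatar fully
# and spends the remaining ~76 kB on the big post photo.
```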
 
One major missing piece is that you're assuming the author also created all the images.  This is very rarely the case.  It assumes the author is sufficiently intelligent to know what is important and what is unimportant.  This is very rarely the case.  It assumes the host knows enough about the client to decide what is "good enough".  This is very rarely the case.
Why does it know it needs to load the icons fully?  Why do the logo and the avatars get lower priority?  How is this decided and why can't such a decision be automated?  What is gained by having the author intervene to decide what quality levels are appropriate?
 
Again ninjaed by WizardStan. :)

The author doesn't know what is important/unimportant, or more specifically may not have the same priorities as the user, especially with meshed content from different hosts (ads are probably the most obvious case, but forum posts have the same problems), making the quality negotiation tables at least untrusted if not totally useless. Devising a viable algorithm working on this untrusted input is AFAICS much more difficult and error-prone than giving the user a simple knob and perhaps an override for individual images.

edit2: Thinking about trust. Signed images may have benefits.
 
As I've understood Christoph's argument, the FLIF files would simply have a table of qualities such as 'if you want x% load y bytes of this image'.  The website then specifies what percentages different images should have.  So there are two authors - the author of the image and the author of the webpage.  Malicious adverts could still scupper things by saying that even 1% of the image requires the full thing, but I'd hope most adverts aren't deliberately trying to break the website they're shown with (even if that happens depressingly often in practice).

I'll just mention the recent practice of some websites of displaying enormous but low-resolution (albeit bilinearly filtered, somehow) images if I dare turn off JavaScript to visit them, presumably because they assume I'm visiting on some small-screen device over a slow/costly connection.  I suppose that could be construed as the webpage author mistakenly deciding what bandwidth I should consume.
 
As I've understood Christoph's argument, the FLIF files would simply have a table of qualities such as 'if you want x% load y bytes of this image'.  The website then specifies what percentages different images should have.  So there are two authors - the author of the image and the author of the webpage.  Malicious adverts could still scupper things by saying that even 1% of the image requires the full thing, but I'd hope most adverts aren't deliberately trying to break the website they're shown with (even if that happens depressingly often in practice).
 

Why would the author need to be involved in that?  Surely a lookup table translating a selection of quality levels to their equivalent download size could be calculated automatically when the file is generated?

-Neelix
 
And if it can be calculated automatically on the server side then it can be estimated on the client side.
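A sketch of how such a lookup table could be produced when the file is written, assuming the encoder can report the byte offset at which each progressive pass ends and how many pixels are decoded by then (those inputs are assumed here, not an existing flif tool feature); the percent-of-pixels value is just a stand-in for a real quality metric:

```python
def build_quality_table(pass_end_offsets: list, pass_pixel_counts: list) -> dict:
    """For each progressive pass, record the rough quality it delivers
    (here: percent of the full pixel count decoded so far) and the number
    of bytes needed to reach it. Both lists are per pass and ascending."""
    full_pixels = pass_pixel_counts[-1]
    table = {}
    for pixels, offset in zip(pass_pixel_counts, pass_end_offsets):
        quality_percent = round(100 * pixels / full_pixels)
        table[quality_percent] = offset
    return table

# Made-up numbers for a small image with four passes:
# build_quality_table([800, 3_000, 12_000, 50_000], [1_024, 4_096, 16_384, 65_536])
# -> {2: 800, 6: 3000, 25: 12000, 100: 50000}
```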
 