Notes on Adaptive Images (yet again!)

All the cool kids are doing responsive design, it seems. This means fluid designs, having some hot media query action to reformat your layout depending on screen size, and ensuring your images are flexible so they don’t break out of their containers, generally by setting them to {max-width:100%;}.

Having images scale down presents a problem, though, if you’re a performance-minded sort of chap(ette). Why send a 300K 400 x 800 image that would look lovely on a tablet attached to wifi, but which takes ages to download on a 3G phone, and which gets resized by the browser to fit a 320px-wide screen? Not only are you wasting your user’s bandwidth and time, but not every browser is created equal and not every resize algorithm makes pleasing end results. The CSS 4(!) image-rendering property can help with this, but it only hints to the browser.

Sending the right-sized image to devices without wasting bandwidth is one of the knottiest problems in cross-device and responsive design at the moment. In the 24ways advent calendar series of articles, the subject has been discussed twice in eight articles (by Matt Wilcox and Jake Archibald). There are numerous other techniques, as well, such as tinySrc and the Filament Group’s Responsive Images.

All these are very clever, different solutions to the same problem, and they all rely on scripts, or cookies, or server-side cleverness or (in the case of Jake’s ideas) dirty hacks and spacer GIFs. I’m reminded of the Image Replacement techniques of more than six years ago, which were over-engineered solutions to a problem better solved by CSS web fonts.

Let’s recap. We have several images of different sizes, and want to send only the most appropriately-sized image to the browser. The circumstances differ. The canonical use case is to send smaller, lower-resolution images to smaller screens, on the assumption that those devices have slow connections and low-resolution displays. That assumption doesn’t always hold, though: some people are using retina displays on fast wifi networks. So, while CSS Media Queries currently allow us to detect screen width and pixel density, we need new media features such as network speed/bandwidth.

The DAP is working on a Network Information API; there’s a PhoneGap version for native apps, and a Modernizr detect. But using script for this is much clumsier than accessing it via Media Queries, and if you just want to rearrange CSS layout, CSS is the place to detect it. (Purists may argue that network connection isn’t a characteristic of the device, but in your face, purists!)
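For comparison, the script route looks something like this. This is a hedged sketch, not a stable API: the Network Information API’s shape has varied between drafts and vendor prefixes (connection.type early on, connection.effectiveType later), and the slow/fast buckets below are made up for illustration.

```javascript
// Sketch of the script-based alternative to a (network-speed) media
// query. The properties probed here (connection, mozConnection,
// webkitConnection, effectiveType, type) reflect various drafts and
// prefixes; the slow/fast classification is invented for illustration.
function connectionHint(nav) {
  var conn = nav.connection || nav.mozConnection || nav.webkitConnection;
  if (!conn) return 'unknown';
  var t = conn.effectiveType || conn.type;
  if (t === 'slow-2g' || t === '2g' || t === '3g' || t === 'cellular') {
    return 'slow';
  }
  return 'fast';
}

// In a page you'd call connectionHint(navigator) and then swap image
// sources by hand -- exactly the plumbing a media query would avoid.
console.log(connectionHint({ connection: { effectiveType: '3g' } })); // "slow"
```

All of that per-page plumbing is what a declarative media feature would make unnecessary.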

Once you have a media query, you can swap images in and out using the CSS content property:

<img id=thingy src=picture.png alt="a mankini">
@media all and (max-width:600px) {
 #thingy {content: url(medium-res.png);}
}

@media all and (max-width:320px) {
 #thingy {content: url(low-res.png);}
}

@media all and (network-speed:3g) {
 #thingy {content: attr(alt);}
}

A browser that doesn’t support Media Queries, or doesn’t report “true” for any of them, shows picture.png, the src of the img. A browser whose viewport is 600px or narrower replaces picture.png with medium-res.png; one 320px or narrower replaces it with low-res.png. A browser that is only connected via 3G replaces the image with its alt text.

I first researched this technique in 2008 as a way of doing image replacement without extra markup (ironically enough). The first two media queries only work in Opera and Chrome at the moment, as they’re the only browsers that fully support content without :before or :after. (The network-speed media query works nowhere, as I just made it up.)

Recently, Nicolas Gallagher experimented with generated content for responsive images, and unfortunately discovered that the technique has no advantage: browsers always download picture.png, even if they immediately replace it and never show it. Perhaps this could be optimised away by browsers, but there would still be the double-download problem in current browsers.

My mind turned to HTML5 video. Here we have an element that has the ability to specify multiple sources (because of different codecs) and can also switch sources according to media characteristics. It borrows syntax from Media Queries but puts them in the HTML:

<video controls>
   <source src=high-res.webm media="(min-width:800px)">
   <source src=low-res.webm>
   <!-- fallback content -->
</video>

I firmly believe that if something as new-fangled as video can be responsive in a declarative manner (without needing script, APIs, cookies or server magic) then responsive images should be similarly simple.

Previously, on the mailing list, a new <picture> element was proposed. I mashed up that idea with the already-proven video code above and came up with a strawman:

<picture alt="angry pirate">
   <source src=hires.png media="(min-width:800px)">
   <source src=midres.png media="(network-speed:3g)">
   <source src=lores.png>
   <!-- fallback for browsers without support -->
   <img src=midres.png alt="angry pirate">
</picture>

This would show a different source image for different situations. Old (=current) browsers get the fallback img element. As I said, this is a strawman; it ain’t pretty. But it would work. (It’s discussed in my recent Smashing Magazine article HTML5 Semantics.)

I prefer the media queries to be in the HTML for two reasons: you don’t need to have ids or complex selectors to target a particular image, and (more importantly) many content authors use a CMS and have no ability to edit the CSS. (Although the nasty <style scoped> could solve this.)

On the other hand, I might be over-engineering the whole problem. I chatted informally with my colleague Anne van Kesteren, glamorous Dutch WHATWG inner-circle member. There’s a school of thought that says everything will be 300ppi and networks will be fast enough, so this is really an intermediate problem until everyone starts using high-res graphics and all displays go from 150ppi to 300ppi. Standards are long-term, and by the time we have a standardised version, the problem might have gone away.

What do you think?

(Matt Machell pithily pointed out “if only browsers hadn’t forgotten the old school lowsrc attribute for images”.)

Looks like our chums at WHATWG are discussing this too.

79 Responses to “Notes on Adaptive Images (yet again!)”

Comment by Jake Archibald

Quick correction: my final solution doesn’t use spacer GIFs; only the image you want for that device gets downloaded. Not that that makes it any less dirty.

I’d love for this problem to be solved in CSS, either via something like content: or by setting display:none on particular <source> elements. Problem is, browsers download those bits of content regardless of CSS, and making that change now would be a backwards-incompatible mess.

I don’t agree that this problem is going away; we still have emerging markets that will have it for a long time. Also, a new element such as picture would let us add extra native features, such as an attribute to say “don’t download this until it’s scrolled into view”.

Comment by Nicolas Gallagher

While picture would work to emulate existing responsive image solutions, it still uses viewport/device width as a proxy for bandwidth. However, it is an approach that a few of us also concluded would be worth the WG looking at and adapting as necessary.

Mat Marquis (Wilto) had a very similar conversation with Anne the other day, and I believe he’s putting together an email with the thoughts of various people and some data to present to the WG.

However, we’re still talking about bandwidth concerns today, despite some countries having insanely fast connection speeds. Even if our western infrastructure rapidly reaches a point where mobiles can access the internet from anywhere at high speed, we’ve still got huge populations in the world on very slow connections. So it doesn’t seem ridiculous to think that highly disparate connection speeds will “always” be an issue, especially since the size of the assets we send only seems to increase as bandwidth does. So today’s “fast” will be tomorrow’s “slow”, which still leaves us with the headache of adapting website assets to meet the context of the visitor.

Comment by Nicolas Gallagher

Jake said:

Also, a new element such as picture would let us add extra native features, such as an attribute to say “don’t download this until it’s scrolled into view”.

I also floated this idea in a conversation with @grigs, @wilto, and others. Something like delay to prevent the immediate downloading of an asset like an image, giving you time to switch out the content with CSS or do whatever you’d like to do. Not sure if something like that has already been run past WG members, but it would be interesting to know what the potential problems with it would be.

Comment by Ian Devlin

I like the idea of a picture element as you’ve described. And of course Necolas’ comment makes loads of sense with regard to faster speeds meaning we’ll just use bigger images, which will still affect those on slower speeds, whatever “slower” happens to mean at that time.

Comment by Bruce


“While picture would work to emulate existing responsive image solutions, it still uses viewport/device width as a proxy for bandwidth.”

I just changed the strawman code to indicate that it could include my other media query strawman, network-speed. (The HTML5 video media attribute merely has to be a valid CSS MQ, so if it’s added to CSS it’s legal in HTML5 too.)

Comment by David

The use case is “client wants the image to display it using these dimensions”.

So if you don’t like the server deciding which one is best (with an HTTP header), why not specify alternative images in the CSS? (Yes, because it’s only a presentation problem; no need to pollute the semantic HTML for that.)

Something like:

@imgalt picture.png {

Don’t mix that problem with media queries; it has nothing to do with them. Well, it’s related, as media queries let you specify the width and height of images. But determining which resource is most appropriate for those dimensions is another problem.

Comment by Gunnar Bittersmann

Should #thingy {content: url(…);} work in browsers at all, given that img is an empty element, i.e. it must not have any content? That would mean no CSS generated content either.
However, it works in Opera and Safari, but not in Firefox and IE (which display the @src image, unimpressed by CSS), nor in Chrome (which displays neither image).

Comment by MicroAngelo

I definitely agree with you about the tendency of us web developers to over-engineer very clever HTML/CSS/JS solutions to problems that really should be solved by the browser or the server… we need to learn to pick our own fights!

In this case (and really I think in the video case too) we should be looking at a fifteen-year-old solution to this problem: the HTTP/1.1 specification defines “content negotiation”, whereby the browser and the server negotiate how best to serve content appropriate to the capabilities of the viewing device. It was originally designed to help serve alternative fallback images to browsers that didn’t understand these newfangled formats such as jpeg, gif and png.

Of course, as we all remember with horror, Microsoft managed to effectively kill it off by very cleverly having IE lie about what it could accept and understand (remember transparent png files?) and so it’s fallen out of use: I say let’s bring it back!

Apache supports it well: MultiViews runs on it. We just need to think up some way of storing different-resolution images and referencing them with the same URI.

Keep the markup simple and easy to read, and let the server do the heavy lifting: this is a problem for HTTP to solve, not HTML!

Comment by Matt Wilcox

I actually far prefer your original solution to the CSS one you’ve talked about here.

The thing to remember here is that embedded images are not the domain of CSS. CSS is about presenting existing semantic content. It is decorative. It is not, and should never be, used to insert actual semantic content that doesn’t already exist. That’s why use of the “content” property is so often warned against.

In addition to the theological problem I have with your CSS method there is a practical one: it’s of no use at all to anyone who’s putting content in through a CMS. How are the alternative images going to make their way into the CSS? Do we *really* want to put <style> inside the head of a page just to control which image is sent?

With this method you also lose one of the advantages of the <picture> method: that you can supply semantically different content if needed, along with different alt text etc. There will be use cases for this. For example, a large screen allows you to show a large screenshot of a given talking point. A small screen does not, so on a small screen you may wish to send not just a re-scaled image, but a completely different one that is more heavily cropped.

As for the idea that this is a temporary problem: I agree. But then to be of any real use you need to define “temporary”, and I think we’re talking years, not months. Many years. Let’s not forget that smaller devices that connect to the net are also far less powerful, and the performance improvements that time inevitably hands down will take longer to reach those devices. This is a problem worth addressing.

Comment by Matt Wilcox

Actually, that picture example code isn’t really good enough. It’s not just the source that may need to adapt, but the alt and title attributes too. This is an area where it must be HTML; no server-side solution is going to be able to do this well.

Comment by Nicolas Hoizey

I agree with Matt that alternative content images should not be set in the CSS; it would require dynamic CSS generation in the CMS.

I also hope “3g” will never be used as a speed indicator; I sometimes get better speed on good EDGE than on bad 3G. 3G can also be much better than WiFi over ADSL, and so on…

Comment by Henri Sivonen

The picture element might fly.

Swapping the image via CSS would cause browsers to fetch two images, because most sites won’t use the CSS trick anyway, so it will be reasonable for browsers to continue to speculatively load the resource pointed to by the src attribute of an img element.

Comment by Bruce

Nicolas Hoizey said “I also hope ‘3g’ will never be used as a speed indicator”

I agree; that’s just a silly strawman. The point is that we need to be able to detect that (declaratively) and make images respond to it, not that “3g” is a useful value.

Comment by David

@Matt “Embedded images are not the domain of CSS”
Maybe, but displaying different images depending on a screen capability definitely is. It IS purely decorative (even your cropped example is decorative). We really are talking about the style of an image (pixelated/imprecise or high-res/detailed), and about something else that has nothing to do with style or semantics (performance).

Can we please separate the problems?
Let’s recap about alternative images:

- It shouldn’t be in HTML (because, Bruce, your arguments would then be valid for every styling feature)
- It shouldn’t be solved with media queries based on viewport size. Even so, the UA or the author can take that into account to choose a suitable size to display.
- It shouldn’t be solved with media queries based on network speed. Even so, the UA can take its speed into account to choose a better size to request (and possibly resize).

We are only looking for a mechanism to get different resources based on different dimensions. With that we can solve every use case very simply.

Comment by Terence Eden

Connection bearer isn’t the same as connection speed – I can be connected by WiFi to my 3G phone which is on EDGE in a congested network.

Personally, I’m a fan of the server sending down an appropriate image.

If the “accept” header says JPG but not PNG, you should serve a JPG.

If the “current-speed” header (remember, it can change during a session) is below a certain threshold, the compression of the image (or the format) should be altered.

If the “max-width” header (or UA sniffing) is 500px, serve up an image no larger than that.

So, the HTML scribe only has to write
img src="foo"

And then the browser sends “Please, sir, may I have a JPG plz. Also, current speed is 5kBps. Nothing larger than 320 though, thanks!”.

Computers are there to do the hard work for us. We shouldn’t be writing extra markup in every single new document.

Get the silicon slaves to do it all. Or start using SVG ;-)
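The server-side picker Terence describes could be sketched like this. To be clear about what’s real and what isn’t: Accept is the only genuine HTTP header below; “max-width” and “current-speed” are the speculative headers from his comment, and the variant format and threshold are mine.

```javascript
// Hypothetical server-side variant picker in the spirit of Terence's
// comment. Only "accept" is a real request header; "max-width" and
// "current-speed" (kBps) are the speculative headers he describes.
function chooseVariant(headers, variants) {
  var accept = headers['accept'] || '';
  var maxWidth = parseInt(headers['max-width'], 10) || Infinity;
  var speed = parseFloat(headers['current-speed']) || Infinity;

  // Keep variants that fit the viewport and are in a format the UA accepts.
  var fits = variants.filter(function (v) {
    return v.width <= maxWidth && accept.indexOf(v.type) !== -1;
  });
  fits.sort(function (a, b) { return b.width - a.width; }); // widest first

  if (!fits.length) return null;
  // On a slow link (arbitrary 10kBps threshold), fall back to the
  // smallest acceptable variant instead of the widest.
  return speed < 10 ? fits[fits.length - 1] : fits[0];
}
```

So a request carrying accept: image/jpeg, max-width: 500 would get the widest JPEG no wider than 500px, while adding current-speed: 5 would drop it to the smallest acceptable one.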

Comment by David

@Bruce “The point is that we need to be able to detect [speed] (declaratively) and make images respond to it”

But this is nonsense. Is it April 1st, or international troll day, or has everybody smoked weed?

Let the author declare the alternatives with their dimensions and/or size. Full stop.

Choosing the most suitable one among those alternatives is the UA’s job. It can decide which is best, based on the declared dimensions or size, and on its own capabilities. Can’t it?

Even Henri is high!
What (ganja party) am I missing here?

Comment by Rasmus Fløe

Hmmm, if attr() could be wrapped in a url(), would that be a workable solution?

<img src=picture.png data-medium-res="picture-medium.png" data-low-res="picture-low.png" alt="a mankini">

@media all and (max-width:600px) {
 img[data-medium-res] {content: url(attr(data-medium-res));}
}

@media all and (max-width:320px) {
 img[data-low-res] {content: url(attr(data-low-res));}
}

@media all and (network-speed:3g) {
 img {content: attr(alt);}
}

Comment by Matt Wilcox

PS: with regard to this not really being a problem (and my agreeing with that in the long run): Adaptive Images means you can fix it now, and when it isn’t a problem any more you simply delete the .htaccess rule and the ai-cache directory. Done. No need to change your mark-up yet again. You’d never know AI had been there ;)

Comment by Nicolas Hoizey

Even if I understand it would be less obvious when reading, I think we need two media features, “min-network-speed” and “max-network-speed”, with values in Mbps.

We could then write this:

<picture alt="angry pirate">
<source src=small.jpg media="(max-network-speed:0.5Mbps)">
<source src=small.jpg media="(max-width:320px)">
<source src=big.jpg media="(min-width:800px) and (min-network-speed:1Mbps)">
<source src=big-with-compression-artifacts.jpg media="(min-width:800px) and (max-network-speed:1Mbps)">
<source src=medium.jpg> <!-- 320px < width < 800px & network-speed < 0.5Mbps -->
<!-- fallback for browsers without support -->
<img src=medium.jpg alt="angry pirate">
</picture>

A few notes:
- I use .jpg instead of .png so I can “play” with compression ratios for different network speeds at the same viewport.
- I don’t know how to deal with the edge case where the network speed is exactly 1Mbps. With MQ based on width, we usually use min-width:100px and max-width:99px; there is never (I hope) a width of 99.5px.
- playing on both available width and network speed can lead to a lot of alternatives

Comment by Bruce

Terence said

So, the HTML scribe only has to write img src="foo". And then the browser sends “Please, sir, may I have a JPG plz. Also, current speed is 5kBps. Nothing larger than 320 though, thanks!”.

Computers are there to do the hard work for us. We shouldn’t be writing extra markup in every single new document.

I’ve got lots of sympathy with what you say. On the other hand, mobile browsers have always made good guesses at resizing images and rearranging websites for their screens. But those blumming designers demand more and more control, so we need to give it to them.

Comment by Matt Wilcox

I also think Terence’s comment is a good example of why we need both a server-side and a client-side solution. It IS ridiculous to have to duplicate so much when a lot of the time we don’t need a semantically different resource, just a down-sampled version of the core one.

Comment by Matthijs

Maybe we should take a step back and ask ourselves if we aren’t assuming too much. If we aren’t deciding too much for the user.

What if I’m browsing on my phone, visiting a certain website, on a slow connection somewhere. I want to check out a few of the latest pictures. In detail even, possibly.

Even if the website could measure my connection speed perfectly, why would or should the website prevent me from viewing and downloading the real, maybe bigger image? Why would it only send me a scaled down 320px tiny image? Maybe I just want to view the real image and zoom in to view the details. Or download the real (big) image for later use.

Sure, I can understand that in many cases a user doesn’t want to download huge images on a small phone and/or a slow connection. But that problem has always existed. My desktop PC can be on a slow connection as well. I don’t want a website to send me -only- smaller images in that situation either.

Maybe the whole solution should be built into the browsers. That way a user decides what to view (and download). It’s the same thing as default font size. I might set my browser font size to 30px because of poor vision. All websites I visit then have (hopefully) 30px font sizes. So the same should be the case for images. When I’m on my mobile on the road and want to save bandwidth (expensive!), I set my browser to only download low-res versions of all images (or none at all).

If an image is important content, it should be delivered in full. And then it is (or should be) up to the user to decide what to download. If it’s not that important it might as well be a small default size. Or not exist at all.

Comment by David Goss

Hi Bruce

Great to see lots of big brains working on this problem at the moment. I’m beginning to think that, whilst some of the solutions people have come up with are ingenious, there just isn’t going to be a good/clean/practical enough solution with current client-side standards, so perhaps we should focus our efforts on finding a decent declarative solution and getting it implemented.

I strongly feel that this is not the domain of CSS. Images within the HTML are content and that’s where they should stay. Also, I definitely think we should have connection speed media queries.

I like your <picture> idea, except for the <picture> element itself — it would be great if we could adapt/extend <img> to serve the purpose and still degrade gracefully. Quick idea:

<img multisrc alt="a mankini" src="img/mankini.jpg">
 <source src="img/mankini-medium.jpg" media="screen and (min-width: 400px)" />
 <source src="img/mankini-large.jpg" media="screen and (min-width: 600px)" />
 <source src="img/mankini-print.jpg" media="print" />

So, any <img>s with the multisrc attribute should have a closing tag, and the <source>s they contain are cycled through and used as the src if their media queries match.

Just did some quick tests in current versions of Chrome, Firefox, IE, Opera and Mobile Safari. All will ignore the closing img tag and add the imgs and sources to the DOM at the same level, in sequence, with the sources being effectively ignored, and the <img>s render.

So, it might work. I need to do more research and write a blog post.
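The selection logic David describes could be polyfilled roughly as below. The first-match-wins ordering mirrors how <video> picks among its <source> children, and the media-query test is injected so the function can be exercised outside a browser (in a page you would pass a wrapper around window.matchMedia). All names here are illustrative.

```javascript
// Rough polyfill sketch for the proposed multisrc behaviour: walk the
// <source> children in document order and take the first whose media
// query matches, as <video> does; otherwise fall back to the img's own
// src. mediaMatches is injected -- in a page it would be
// function (m) { return window.matchMedia(m).matches; }.
function pickSource(sources, fallbackSrc, mediaMatches) {
  for (var i = 0; i < sources.length; i++) {
    var s = sources[i];
    if (!s.media || mediaMatches(s.media)) {
      return s.src;
    }
  }
  return fallbackSrc;
}
```

One consequence of first-match-wins: min-width queries would need to be listed largest-first (as in Bruce’s strawman), otherwise a wide screen would stop at the first, smallest match.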

Comment by Gerben

My solution would be something entirely different. We could just use something that is already here, namely Progressive JPEG. The only thing that needs to be added is that browsers stop downloading once enough detail is available to show the image at its current dimensions. The smaller the image is in the layout, the sooner the download can stop.

As a nice addition, if the layout changes (e.g. resizing the window) and more detail is needed (the image is bigger in the new layout), the browser can just download the extra information using HTTP Range headers. No need to re-download the old data. If someone wants to save the image, the browser could request all the remaining data, and the user gets the same image as someone saving it on a desktop PC.

Also, this method degrades gracefully in browsers that don’t yet support this ‘progressive/partial download’; they just waste bandwidth.

It could also solve the network-speed problem much better, as on slower connections you get the less detailed version automatically, but if people really want more detail they can just wait a bit longer. The one-web idea also applies to images. Why would someone with a smaller screen not want the big image? He might want to download and print it out to hang on the wall. Why would someone on 3G not want a better-quality image? He might be willing to wait a bit longer to get a better-looking image; he might also want to print the image, which would look like crap with the above solutions.

With this method everyone can get the same image. Developers only need to create one image, and don’t need to fiddle with extra HTML and media queries that probably need to change whenever the layout changes and the content area becomes 100px wider.

I say one (progressive) image to rule them all :-)
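Gerben’s scheme can be sketched in two small pieces. Both are illustrative guesses: a real browser would truncate at a scan boundary inside the JPEG, not at the byte count this made-up area-ratio heuristic produces.

```javascript
// Crude estimate of how much of a progressive JPEG is worth fetching
// for a given display size. The area-ratio heuristic is invented for
// illustration; real scan boundaries depend on how the file was encoded.
function bytesWorthFetching(displayWidth, fullWidth, fullBytes) {
  var ratio = Math.min(1, displayWidth / fullWidth);
  return Math.ceil(fullBytes * ratio * ratio); // detail scales roughly with area
}

// If the layout later widens (or the user saves/prints the image),
// resume with an HTTP Range request instead of re-downloading:
// "bytes=N-" means "everything from offset N to the end of the file".
function resumeRange(bytesAlreadyFetched) {
  return 'bytes=' + bytesAlreadyFetched + '-'; // value for the Range header
}
```

So a 400px-wide slot for an 800px, 100,000-byte image would fetch roughly a quarter of the file up front, then a later Range: bytes=25000- request tops it up.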

Comment by Bryan Rieger

I usually try to stay out of these conversations as the whole question of how to handle images on mobile devices has been plaguing me for many years now, and I’ve already tried/experimented with the majority of techniques currently being discussed. That said, there may be something of value to the conversation lurking about my head, so here goes nothing.

WARNING: I have no answers, only a few related ideas that may be helpful. This also goes beyond ‘responsive images’, but it’s been my experience that images are just the tip of the iceberg when it comes to content adaptation.

Bruce mentioned,

…while currently CSS Media Queries allow us to detect screen width and pixel density, we need new media features such as network speed/ bandwidth.

This is really key, and would be insanely useful. While someone will always tell you to optimise images for mobile devices, there will always be someone who wants the highest quality possible on their high-density device over wifi. Without any indication of the network connection and/or speed it is impossible to adequately adapt content for these contexts. Ideally, this would be available via JavaScript and media queries.

FWIW I’m firmly in the “images need to be handled in markup only if they are content” camp. Of course adaptation via CSS is already possible today using media queries, for more ‘decorative’ stuff (although it would be nice if browsers only loaded resources that were actually relevant to the current media queries).

The rest of this comment deals with the problem of images as content.

Regarding the idea… lovely, especially with media queries – although we may want to consider adding a few other media queries. For instance, does the device support SVG, and if so can we supply an SVG image in place of the default JPG or PNG? FWIW we currently polyfill this type of tag functionality via the server. Essentially, within our content we specify an image like so:

<img src="resources://image/bob" alt="bob" />

Before sending any content to the browser, the server filters all image tags[1] looking for any src attributes beginning with ‘resources://’. The src value is then parsed and the related objects are retrieved:

<image src="bob"> [2]
 <var profile="{'width':[0,320]}" src="bob.png" alt="bob at age 3" />
 <var profile="{'width':[0,720], 'svg':1}" src="bob.svg" alt="playtime" />
 <var profile="{'width':[321,720]}" src="larger-bob.png" alt="fishing" />

[1] We also filter HTML5 tags such as article, section, etc and replace with div.article, div.section, div.etc and doctype as required for older devices that really don’t deal with non XHTML-MP content very well.

[2] Ideally, the alt content shouldn’t change so drastically (that whole ‘thematic consistency’ thing), but there’s no reason they can’t be adapted also as required.

The ‘profile (JSON)’ attribute is parsed for each variant and then matched against the actual device profile (previously created using a combined feature detection and device database approach – this usually solves the dreaded ‘first-load/cold-start’ problem). I think an approach more in line with media queries is better long-term, but this works today without having to guess what may or may not happen regarding media queries specs/features in the future. I’ll refactor/remove it when the spec is clear.
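As a guess at how that matching might work (the range-versus-flag semantics below are my assumption, not Bryan’s actual implementation):

```javascript
// Hypothetical matcher for the profile attributes above: each key is
// either a [min, max] range (e.g. 'width': [0, 320]) or a required
// capability flag (e.g. 'svg': 1), checked against the device profile
// built from feature detection plus a device database.
function profileMatches(profile, device) {
  return Object.keys(profile).every(function (key) {
    var want = profile[key];
    if (Array.isArray(want)) {
      return device[key] >= want[0] && device[key] <= want[1];
    }
    return !!device[key] === !!want; // capability flags compared as booleans
  });
}
```

With this, the {'width':[0,320]} variant matches a 300px-wide device, while the SVG variant additionally requires the svg capability to be present.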

The tag is then re-written using the data supplied in the appropriate variant so the final, transmitted page always contains semantically correct, cruft-free image tags. We also currently employ this approach with tags where different components require different behaviours based on the layout supported on the device.

We also wrap the appropriate related resource variations in a JSON object which is then sent to the browser and accessible via a window.resources object. This enables us to switch resources (using JavaScript) should the device move into another layout on an orientation change (i.e. a resize event). An early example of this approach can be found on the site.

We’re also playing with this approach in terms of dealing with content (or chrome) that really needs to be adapted for varying device capabilities, i.e.: large data tables, canvas based components, complex navigation components, etc.

<switch xpath="//table[id=bigdata]" css="table#bigdata">
 <case profile="{'width':[0,320]}">
  …insert small table variant here
 <case profile="{'width':[321,720], 'canvas':true}">
  …insert medium table variant with charting view here
  …insert default table variant (if it's not in the content already)

We ended up going this route as early transformation experiments with XSLT were far too complicated to warrant the effort, and more importantly felt like a huge step backwards IMHO.

A few other notes in terms of other comments:

The ‘delay’ as mentioned by Jake and Nicolas is a fabulous idea!

I do think the content: url(attr(data-medium-res)) idea that Rasmus and Nicolas put forth has *lots* of potential for browsers in the future – and oddly echoes some of the ideas in my ‘resources://’ hack above.

Sending device properties via headers is not a bad idea, but I’m not convinced that screen size alone is enough – especially in the long term. That said, sending every possible property in the header with every request is insane. Perhaps something along the lines of (containing only the properties the server actually requires) could be automatically grabbed by the client prior to making any requests to the server. The client then appends only the requested device properties with each request.

Anyway, that’s the end of my brain dump.
If there’s anything useful in there please take it and run with it.

I look forward to seeing what you guys come up with.

Comment by Jake Archibald

Love the progressive JPEG idea. If the image was full-width and the browser was resized wider, the download could simply continue from the previous point.

Comment by Jordan Gray

“There’s a school of thought that says everything will be 300ppi and networks will be fast enough, so this is really an intermediate problem until everyone starts using highres graphics and all displays go from 150 to 300. Standards are long term, and by the time we have a standardised version, the problem might have gone away.”

(Unexpected tangent warning!)

I can buy that this may soon be the case in developed countries, but what about the situation in developing countries?

For many people in developing countries a mobile phone is their sole point of access to the Internet. Not everyone has a smartphone, of course; and of those who do, they are predominantly Nokia (Symbian). The more savvy among them will be using Opera, but most just use whatever comes with their handset. I work for a technology recycling company and have a good idea where our inventory goes, so I know they are often using the same phones that we were using several years ago. For most people who do have Internet access, we’re talking relatively old handsets and dated browsers.

Cellular coverage is another issue. Again, this has progressed in leaps and bounds, but many countries still have relatively poor coverage. Heck, 3G coverage isn’t a given in much of the developed world; how long do you think it’s going to be before Africa or Latin America have excellent 3G coverage and device penetration? What about 4G? And remember that with everyone using a mobile as their sole means of access, contention is going to be fairly high.

Which brings us back to the question: when are they going to be ready for 300ppi images and speedy, reliable data?

It’s worth remembering some of the lessons we’ve learned from and about the Web. That information empowers individuals and communities, and that diversity and democracy empowers the Web. People in the developing world have a lot to gain from access to, and participation in, the online world.

Think about how the Internet could transform Rwanda. Consider the real possibility that Twitter might have tipped the balance in forcing the international community to intervene more effectively in the Rwandan Genocide, that it has already facilitated the downfall of a corrupt regime. We’re potentially talking about real social and economic change.

It’s sobering to consider, but responsive design isn’t just about someone browsing Facebook from their iPhone on lunch. It’s also about connecting us with a growing number of people whose online experience is far more limited, who live in places where the infrastructure lags behind much of the developed world, and yet who stand to benefit from it in ways we can’t imagine. I think it’s optimistic to believe that an explosion in infrastructure and handset availability will eliminate them from consideration any time soon. They deserve to be part of the conversation.

Comment by AlastairC

I also love the progressive jpg idea, that seems to be the ‘native’ solution, especially as pixels are noticeably a relative unit these days (with zoom and high-res displays).

A browser could work out the size of the picture in the layout, and download enough to fill in the pixels for its resolution.

I’ve not heard it talked about much; it would be good to hear an explanation of why it wouldn’t work, or why people haven’t tried it already.

Comment by David

I’ve also been thinking about the progressive JPEG, but I don’t think it can produce a scaled image of acceptable quality, unfortunately.
Resizing an image is not trivial.
Moreover, that doesn’t address Matt’s use case, where one wants a cropped image instead of just a scaled one.

Another advantage of CSS over HTML is that you can reuse your declaration, instead of repeating the alternatives for every img tag.

Comment 26 (in French unfortunately) is a perfect solution.

Comment by MicroAngelo

Looking back over the other comments here, it’s good to see Terence and I are on the same page (comments #8 & #16), with a few +1s dotted about, for the idea of using content negotiation/the Accept HTTP header to get the server to do the heavy lifting.

As a few have noted, if in the future the markup for even a simple thing like an image is ten lines long, I think we’ve gone wrong somewhere!

It also solves the use case where you do want full-res images on a mobile/low bandwidth device: the UA can start off downloading the small version and then re-request a larger higher quality version as you zoom in. Like Google Maps but for regular images.

I like the progressive JPEG idea, but agree in practice that this doesn’t really work, plus it’s only one specific filetype, whereas the HTTP header idea can work generally with any filetype.

Interesting that Bruce brought this up, given that Opera Mini effectively does this, but using their servers rather than getting each server to do the work/be configured as you like. How does Opera Mini report available bandwidth? Or is it one-size-fits-all low res?

Comment by Bruce

MicroAngelo asked “How does Opera Mini report available bandwidth? Or is it one-size-fits-all low res?”

Opera Mini has preselected compression levels for images (Low, Medium, High), some of them available as settings in the client’s menu. They are not dependent on the bandwidth available.

Comment by Terence Eden

The more I think about this, the more I am convinced that I am right and everyone else is wrong ;-)

We currently can’t get people to put in alt attributes for images – despite the clear benefits to readers with visual impairments (and SEO).
The greatest minds on the web can’t manage to properly escape their HTML in the comments field of this blog.

What hope is there for legions of webmasters to correctly put in multiple image formats, and sizes, and apply them correctly?

In the seminal memo “Cool URIs Don’t Change”, TBL argues that images and files should be referenced without their file extension – and that you should:

Set up your server to do content negotiation
Make references always to the URI without the extension

References which do have the extension on will still work but will not allow your server to select the best of currently available and future formats.

So, ideally, we should say

<img src="BruceInMankini">

If the browser can only display JPG, that’s what the server should send.
If the browser has a maximum resolution of 320px wide, then the content server should send a file of the appropriate size.
If the browser is on a limited bandwidth connection, the server should send a file of the appropriate compression.

I would expect browsers to have “switches” to toggle this behaviour if the user really wanted to download a 5-megapixel image over their GPRS connection.
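A toy sketch of how a server might act on Terence’s extensionless-URI idea. The screen-width input is hypothetical (no such request header exists), and the rendition table is invented; the Accept-header part, at least, is real content negotiation.

```python
# Toy sketch: the server holds several renditions of "BruceInMankini"
# and picks one from the Accept header plus a (hypothetical) reported
# screen width. Available renditions: (mime type, width px) -> filename.
RENDITIONS = {
    ("image/webp", 320): "BruceInMankini-320.webp",
    ("image/webp", 1024): "BruceInMankini-1024.webp",
    ("image/jpeg", 320): "BruceInMankini-320.jpg",
    ("image/jpeg", 1024): "BruceInMankini-1024.jpg",
}

def negotiate(accept_header, screen_width):
    """Prefer WebP if the client accepts it, then the smallest rendition
    at least as wide as the screen (falling back to the largest)."""
    mime = "image/webp" if "image/webp" in accept_header else "image/jpeg"
    widths = sorted(w for (m, w) in RENDITIONS if m == mime)
    width = next((w for w in widths if w >= screen_width), widths[-1])
    return RENDITIONS[(mime, width)]

print(negotiate("image/webp,image/*", 320))  # BruceInMankini-320.webp
print(negotiate("image/jpeg", 800))          # BruceInMankini-1024.jpg
```

The markup stays a single plain img element; all the variant-picking lives server-side.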

No webmaster in a million years is going to create jpg, webp and all the future formats – and then re-render them again in lower resolution for iPhones. And then even lower for wristwatch browsers.

And, dare I say it, the same should apply to the video and audio elements.

If the browser supports formats X, Y, and Z at 640×480, then the content server should decide whether it has an appropriate resource already – or whether to generate it afresh.

Computer time is quick and cheap – human time is expensive. Let’s leave the silicon beasties doing what they do best and let us dumb carbon-based lifeforms write simple HTML.

Comment by MicroAngelo

Ahh, interesting how Opera Mini works: the User can set their preferred compression level to be saved on the browser. And I think this should be something we shouldn’t underestimate: although we can try to help present things in the way that we *think* would be best for the User, we shouldn’t force our assumptions on them.

For example, I often use GPRS on my phone when I’m working, and actually do want full res images. Even if I’m presented with the low-res one first, I should be able to override this.

And it appears that Terence also agrees with me that video should use Content Negotiation to get the correct codec/res too… we really are on the same wavelength here! :)

So, I say we need to extend the HTTP/1.1 “Accept” header to include information like bandwidth and screen size/pixel density etc. which can be sent on every request (so can fluctuate e.g. with rotation/reception/wifi etc.), and then configure the servers to do the heavy lifting.
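Purely to illustrate what such an extended header might carry: none of these tokens are real HTTP, and the names are invented for the sketch.

```python
# Illustrative only: the kind of value an extended Accept-style header
# might hold under MicroAngelo's proposal. Invented syntax throughout.

def device_header(bandwidth_kbps, width, height, dpi):
    return (
        f"bandwidth={bandwidth_kbps}kbps; "
        f"screen={width}x{height}; density={dpi}dpi"
    )

# Values can change between requests (rotation, cell -> wifi, etc.),
# so the header would be rebuilt for each request.
print(device_header(500, 480, 800, 240))
# bandwidth=500kbps; screen=480x800; density=240dpi
```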

Comment by Bruce

“we need to extend the HTTP/1.1 “Accept” header to include information like bandwidth and screen size/pixel density etc. which can be sent on every request (so can fluctuate e.g. with rotation/reception/wifi etc.), and then configure the servers to do the heavy lifting.”

I know Matt Wilcox was asking me if this could be extended. Yes, it could. But adding any info to a header that is sent with every request is troublesome as it adds squillions of bytes across the network.

This is why, IIRC, “X-Do-Not-Track” was abbreviated to “DNT”, for example.

Comment by Anne van Kesteren

Content negotiation (indeed the Accept header and friends) largely failed in practice. In part this is because it is hard to get right and in part it is because other than the User-Agent header there never really is much granularity to go by. Basically anything involving HTTP (even if it is simple, which content negotiation is not) is hard for most developers. It is one of the problems we face with getting CORS more widely adopted.

This is why e.g. CSS has client-side negotiation (using Media Queries), as do the HTML media elements.

I have not looked into much because I largely gave up on content negotiation, but I suspect it would also work poorly with URL fingerprinting, content distribution networks, and similar caching techniques.

Comment by Terence Eden

Is it a header? Or is it User Agent? UAs are already ridiculously verbose, is adding "res(480*800)" really so bad? To me, it’s an identifying characteristic of the device.

The Nokia C6’s UA string is already 190 characters long:
Mozilla/5.0 (Symbian/3; Series60/5.2 NokiaC6-01.3/025.007; Profile/MIDP-2.1 Configuration/CLDC-1.1 ) AppleWebKit/533.4 (KHTML, like Gecko) NokiaBrowser/ Mobile Safari/533.4 3gpp-gba

As for speed, that’s slightly trickier. I may request the page while I’m in HSDPA (14Mbps) then the train moves and I’m in GPRS (54Kbps). So the speed could be changing between requests.

My knowledge of HTTP is not what it once was – would this be suitable for the OPTIONS method?

Comment by Niels Matthijs

Standards are long term, and by the time we have a standardised version, the problem might have gone away.

One problem might go away, but it is just as quickly replaced by a similar one – especially when we’re talking about limitations. You can be sure that when networks and displays are capable enough to answer our current demands, others will try to push the limits and the same problem will resurface.

So I say: standardize it, the future will thank you for it.

Comment by Matt Wilcox

I think we really need to question our assumed best-practices with regard to header information. Yes, the argument that it will add bytes to all network requests is true – but is that a worse thing than continuing along this dumb-network path?

I really want to avoid the “here and now” mentality that landed us with CSS not accepting % as valid units for border-width. Just because something isn’t used *right now* does not mean that it isn’t a good idea to implement.

We’ve seen the need for additional headers for years and years – instead we abuse the UA string. It has become excessively bloated, arcane to use, and increasingly unreliable because browser vendors are abusing it. Send headers. It’ll add bytes. Are a few bytes a problem measured against:

a) the potential that becomes available when the server is aware of the client’s capabilities.
b) the massive saving of bandwidth and file size content negotiation can make.

Bear in mind that while it won’t mean an automatic net gain for desktop, it’ll almost always result in a net gain for mobile, and the internet is headed for mobile.

I wonder if SPDY could work around the supposed “oh my god no never the internet will fall to its knees” horror of sending a couple more bytes per request?

Comment by Matt Wilcox

Or, what’s to stop the browser sending a header on first connection to a domain that says “I can send additional data per request if you like” and the server sending a “yes please, for the following data types: jpg, gif, png”.

Default to sending nothing extra, and when you get a response that says it’d like some more details flip the client behaviour and re-request the original resource.
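Matt’s opt-in handshake could look something like this sketch. The header name and its value format are hypothetical, invented just to show the two-step flow: send nothing extra by default, re-request with details only when asked.

```python
# Sketch of the opt-in handshake: the server's response carries an
# invented "X-Wants-Client-Details" header naming the resource types
# it would like extra data for; the client checks it before deciding
# whether to re-request with the additional details attached.

def should_resend_with_details(response_headers, resource_type):
    """Check whether the server asked for extra data for this type."""
    wanted = response_headers.get("X-Wants-Client-Details", "")
    types = [t.strip() for t in wanted.split(",") if t.strip()]
    return resource_type in types

resp = {"X-Wants-Client-Details": "jpg, gif, png"}
print(should_resend_with_details(resp, "jpg"))  # True
print(should_resend_with_details(resp, "css"))  # False
```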

Comment by Carsten

Just my two cents.

Standard image tag:
<img src="path/to/image.png" alt="Alternative Text" width="960" height="520" />

Standard video tag:
<video width="640" height="480" controls="controls">
<source src="path/to/movie.mp4" type="video/mp4" />
<source src="path/to/movie.ogg" type="video/ogg" />
</video>

Meta Tags:
<meta name="resolution" size="320" src="{data[mediaurl]}{*-320}.{data[extension]}" />
If screen resolution is 320px or lower: Browser loads "path/to/image-320.png", "path/to/movie-320.mp4", "path/to/movie-320.ogg"

<meta name="resolution" size="480" src="{data[mediaurl]}{*-480}.{data[extension]}" />
If screen resolution is between 321px and 480px: Browser loads "path/to/image-480.png", "path/to/movie-480.mp4", "path/to/movie-480.ogg"

<meta name="resolution" size="800" src="{data[mediaurl]}{*-800}.{data[extension]}" />
If screen resolution is between 481px and 800px: Browser loads "path/to/image-800.png", but still loads "path/to/movie.mp4", "path/to/movie.ogg"
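A guess at how Carsten’s template syntax might expand in practice: substitute the base URL and extension, with a “-<size>” suffix chosen from the screen width. The breakpoints mirror his three meta tags; the function names are mine, and this ignores his nuance that video stops getting smaller variants above 480px.

```python
# Sketch of expanding a Carsten-style URL template from screen width.

def pick_suffix(screen_width, breakpoints=(320, 480, 800)):
    """Smallest breakpoint that still covers the screen, else no suffix
    (i.e. the full-size asset)."""
    for bp in breakpoints:
        if screen_width <= bp:
            return f"-{bp}"
    return ""

def expand(media_url, extension, screen_width):
    return f"{media_url}{pick_suffix(screen_width)}.{extension}"

print(expand("path/to/image", "png", 320))   # path/to/image-320.png
print(expand("path/to/image", "png", 1200))  # path/to/image.png
```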

Comment by Marc Brooks

Why not make the browser tell us?
Since we’re talking about changing the browser’s behavior anyway, why not actually make it offer up the information to properly solve this problem server-side? Let’s have the browser send the information that is available to the css media queries in the HTTP GET request headers caused by the one-and-only whatever.png tag or CSS url("whatever.png").

To wit:
X-Media: "screen color:24 resolution:120dpi orientation:portrait device:640x480 viewport:500x300"

(obviously, the aspect-ratio and device-aspect-ratio can be computed, color-index seems meh)

The nice thing about this is that it is really up to the user-agent’s current needs to determine what should be displayed, thus what should be requested. If someone zooms in, the user-agent can ask for a bigger version. If they switch to print view, we can ask for the monochrome version in higher DPI and all is warm and fuzzy. If the server doesn’t care, it serves up the one-true-image. If the server cares, it can serve specific versions. Cache logic can be driven by the standard content-negotiation logic (and left non-cached until it is).
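Server-side, Marc’s proposed value is easy to pull apart. This is purely a sketch of the idea, since no X-Media header exists; the parsing rules (space-separated tokens, colon-separated pairs, bare media type) are inferred from his example.

```python
# Parse a hypothetical X-Media header value into a dict the server
# could use to pick an image variant.

def parse_x_media(value):
    result = {}
    for token in value.split():
        if ":" in token:
            key, _, val = token.partition(":")
            result[key] = val
        else:
            result[token] = True  # bare media type, e.g. "screen"
    return result

media = parse_x_media(
    "screen color:24 resolution:120dpi orientation:portrait "
    "device:640x480 viewport:500x300"
)
print(media["viewport"])  # 500x300
print(media["screen"])    # True
```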

The best part of ALL of this is that the CSS can just reference an image, the html can just include a img tag, etc… and the browser’s request can be driven by what it knows already about the size/placement/device of the image.

Comment by What The New iPad’s Retina Display Means for Web Developers | TransAlchemy

[...] Notes on Adaptive Images (yet again!) — Opera’s Bruce Lawson rounds up problems and solutions facing anyone trying to serve up different images based on screen size. [...]

Comment by Rani

It seems like there is a thought overload here. I also agree about having media queries in HTML, so that editing the CSS won’t be a fuss. What’s more important for me is that images always keep their expected quality.

Comment by Interview With Bruce Lawson of Opera | JTB Productions

[…] I think the web stack is in pretty good shape these days. There’s work to be done making sure that sites can work offline (Appcache-done-right, in whatever guise it comes back) and with web payments. The lack of any useful way for developers to deal with responsive images is a problem, 18 months after it was flagged up. […]
