Images are the bulk of what we move around the web; they comprise over 60% of the average web site’s payload (61.3%, per the latest HTTP Archive stats).
Different use cases would ideally be served different image resources (“responsive images”). For example, a 27″ iMac plugged in to a company network can make use of a large HD image, but a Blackberry in a desert with a spotty wireless signal can’t. If you serve a large image to the Blackberry in the desert, the user might not get anything.
And what if the person with the Blackberry is bitten by a snake and needs to identify it as poisonous by sight? He finds the page of poisonous snakes but the photos are too big for his phone and connection. Maybe he sees one or two poisonous snakes, but not the deadly one that bit him. He dies. The man dies!!!! (Fool shouldn’t have had a Blackberry. Anyway…)
This is an improbable situation, and kind of silly. But we can easily think of more dire and practical examples. Basically, the content of the Internet can be useful in remote and/or time-sensitive situations.
For web developers, focusing on the problem of delivering images in 2013 is a worthy subject to noodle, especially if we are concerned with performance. In this area we will find our biggest wins — where we can offer the most value and make the biggest improvements — UX-wise and performance-wise.
So how do we solve the problem of responsive images? There are many ideas and opinions — hard work and good stuff. Here’s a comprehensive run-down with links to resources. Generally the tack is to serve up different versions (sizes) of an image, some with different crops, perhaps, that might be more suitable for small screens or low bandwidth.
The biggest flaw of this mode of thought — selectively serving more appropriate images — is not poor support, or using the wrong technology for presentation, or trying too hard and shooting ourselves in the foot. It’s that it puts the burden of the solution on the content producer.
Not only is this unrealistic (few will do this extra work), but we should not raise the bar for anyone wanting to put a picture on the Internet. If the solution involves serving more than one image, it will never be elegant. But more importantly, it will never be practical.
For those of us who really care about this subject, the lure of a responsive images solution is like a candy house that we can’t help going into. But it’s trouble.
If we take a step back we can imagine what would really be nice: a responsive image file format that contains different versions of the image — a storage locker, as described by Christopher Schmitt. So tidy! And if our image editor could generate this file for us — nay, if this was the default for our image editor — that would require the least amount of work. We’d actually do it.
But new image formats don’t come along every day, and many years must pass before there is wide browser support. We don’t currently have an image format that stores multiple versions of the same image or…hold on…wait. Don’t we? Isn’t that what a progressive jpeg is?
Yes! The man lives!!!!
Problem solved, at least on the producer side. Yoav Weiss, an images+performance expert, talks about this possibility on his blog, exploring a scenario for a technical implementation. This is a eureka moment. We know it is possible because it’s observable; browsers download and display the first scan of a progressive jpeg before downloading the whole thing. They just need to calculate when they have sufficient information and stop.
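A browser (or any client) can even tell whether a jpeg is progressive before downloading the image data, just by reading the header: a progressive jpeg announces itself with an SOF2 marker (0xFFC2) where a baseline jpeg has SOF0 (0xFFC0). Here’s a minimal sketch of that check in plain Python — it walks the marker segments and ignores some edge cases (fill bytes, exotic SOF variants) that a real decoder would handle:

```python
def is_progressive_jpeg(data: bytes) -> bool:
    """Return True if a JPEG byte stream uses progressive DCT (SOF2 marker)."""
    if data[:2] != b"\xff\xd8":               # every jpeg starts with an SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            return False                      # malformed marker stream
        marker = data[i + 1]
        if marker == 0xC2:                    # SOF2: progressive DCT
            return True
        if marker in (0xC0, 0xC1):            # SOF0/SOF1: baseline DCT
            return False
        if marker == 0xDA:                    # SOS: entropy-coded data begins
            return False
        # Every other header segment carries a two-byte big-endian length.
        length = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + length
    return False
```

The point is that the format already carries the signal; deciding to stop after a sufficient scan is purely a client-side choice.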
Web browsers know more about the system they are installed on than content producers do, obviously. Knowing details such as screen size and pixel depth, and having a means to anticipate network speed and stability, puts browser vendors in a much better position to solve the problem of responsive images. Browsers want to be fast. It’s their most desirable quality and it makes them competitive in the market. If the problem of responsive images is a big issue for us as developers, it’s also a big issue for browser vendors, for the exact same reasons.
Today we can offer videos and know the browser will not download the whole thing, blocking content. What is the difference between a video and an image, really? Why can’t we serve an HD image the same way, especially as the craving for HD images grows?
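The mechanism video streaming relies on — HTTP byte-range requests — works just as well for an image, and a client fetching a progressive jpeg could stop after the early scans the same way. A sketch using Python’s standard urllib, against a hypothetical URL (a range-aware server advertises support with an Accept-Ranges: bytes header and answers 206 Partial Content):

```python
import urllib.request

# Hypothetical URL -- any server that honors Range headers will do.
req = urllib.request.Request("https://example.com/photos/huge.jpg")
req.add_header("Range", "bytes=0-32767")  # only the first 32 KB: enough for early scans

# Uncomment to actually fetch the partial resource:
# with urllib.request.urlopen(req) as resp:
#     first_scans = resp.read()   # status 206 if the server honored the range
```

Nothing new has to be invented on the wire; the client just has to decide when “enough” has arrived.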
Now here’s where I really get in trouble by taking this idea to its most extreme place: someone who wants to share an image should be able to provide one that is carelessly large — as big as any end user can use, today and in the near future — without worry. Someone wanting to share an image should be assured that the web client will download only what it needs. “Joe Photographer” shouldn’t need to know anything about responsive images.
Remember when people used to email digital photos and it would clog your inbox and you would say “Stop doing that!” and hurt their feelings when they, in their ignorance, were just trying to share with you? Technical limitations made us jerks. My mom shouldn’t have to edit in Photoshop. My mom shouldn’t know about Photoshop. And I should be nicer to my mom.
The Internet Protocol specification of August 1979 refers to the robustness principle, which basically says that a client should be liberal about what it receives. Specifically, it “should accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear).” It might be stretching it to consider images “datagrams” and a poorly sized image a “technical error.” Or is it? Regardless, it’s a fact that we do not include images among the things we liberally receive.
We should either change our philosophy and say we’re “liberal about what we accept, with the exception of images” or we can keep our philosophy and include images. And while we’re at it let’s make setting “height” and “width” with pixels obsolete — at least for web content producers. (Can we do this?! Hm…it will take a bit more thought.)
With progressive jpegs we have our foot in the door. Browser vendors can choose not to implement new solutions — new markup and attributes — but they can’t take something away from us that we’ve always had. We must not forget we have progressive images, they should always be the default for the web (why aren’t they, Adobe Photoshop?), and browser vendors need to focus on using progressive images as responsive images.
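Making progressive the default is mostly a tooling question, and it’s very little work. As one example — assuming the Pillow imaging library, though any encoder with a progressive switch does the same job (ImageMagick’s `-interlace Plane`, libjpeg’s `cjpeg -progressive`) — re-encoding is a one-liner:

```python
from PIL import Image  # assumes the Pillow library is installed


def save_progressive(src, dest, quality=80):
    """Re-encode any image Pillow can open as a progressive jpeg."""
    img = Image.open(src).convert("RGB")   # jpeg has no alpha channel
    img.save(dest, "JPEG", quality=quality, progressive=True, optimize=True)
```

If image editors and build tools flipped that one flag by default, the producer side of this problem would quietly disappear.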
The problem with the direction that I’m advocating for is that it’s out of our control and we need to wait for a better implementation. And how can we wait, when this is one of the most important problems of web development? As long as we know our ultimate direction, I think it’s fine for us to create and use temporary solutions. That is, it’s fine for us to go into the candy house, as long as we remember that it’s not our house.
In summary, it should not be the responsibility of the content producers to discover the screen size, pixel depth and network connection of an end user and serve up a custom resource. This is information the client knows best. We should pose questions to browser vendors, and if there are any gotchas with the technical implementation of using a progressive image as a responsive image, we should better understand why.
I would love to get your feedback on this subject and I encourage you to disagree with me. I’ll get the comments fixed on this blog and respond to all. Please email me at nosbora at gee -mail period com.
more to think about: Daan Jobsis discovered that you can increase the size of an image while also increasing the compression and you get a smaller image that looks good on both standard and retina displays. The resolution is higher, the file size is smaller, and it looks great everywhere. More pixels cost us less. What?! Forget what you know. Even without progressive scans we can have faster, higher-resolution images. We should not be afraid to offer more pixels.
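Jobsis’s trick — sometimes called “compressive images” — boils down to: keep (or double) the pixel dimensions and crank the compression way up, instead of downscaling and compressing gently. A sketch of the comparison, again assuming Pillow; the actual byte savings depend on the photo, so treat the numbers it returns as an experiment, not a guarantee:

```python
import io

from PIL import Image  # assumes the Pillow library is installed


def jpeg_size(img, quality):
    """Return the encoded jpeg byte size of img at the given quality setting."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality,
                            progressive=True, optimize=True)
    return buf.tell()


def compare(src, display_w, display_h):
    """Contrast the conventional recipe with the 'compressive' one."""
    img = Image.open(src)
    # Conventional: downscale to display size, moderate compression.
    small = img.resize((display_w, display_h), Image.LANCZOS)
    conventional = jpeg_size(small, quality=80)
    # Compressive: twice the display size, heavy compression.
    big = img.resize((display_w * 2, display_h * 2), Image.LANCZOS)
    compressive = jpeg_size(big, quality=25)
    return conventional, compressive
```

For typical photographic content the second file often comes out smaller while still looking sharp on retina displays, because the extra pixels let the heavy quantization hide in detail the eye can’t resolve at display size.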