Archive for the 'accessibility web standards' Category


Why we can’t do real responsive images with CSS or JavaScript

I’m writing a talk on <picture>, srcset and friends for Awwwards Conference in Barcelona next month (yes, I know this is unparalleled early preparation; I’m heading for the sunshine for 2 weeks soon). I decided that, before I get on to the main subject, I should address the question “why all this complex new markup? Why not just use CSS or JavaScript?” because it’s invariably asked.

But you might not be able to see me in Catalonia to find out, because tickets are nearly sold out. So here’s the answer.

All browsers have what’s called a preloader. As the browser is munching through the HTML – before it’s even started to construct a DOM – the preloader sees “<img>” and rushes off to fetch the resource before it’s even thought about speculating about considering doing anything about the CSS or JavaScript.

It does this to get images as fast as it can – after all, they can often be pretty big and are one of the things that boosts the perceived performance of a page dramatically. Steve Souders, head honcho of Velocity Conference, bloke who knows loads about site speed, and renowned poet called the preloader “the single biggest performance improvement browsers have ever made” in his sonnet “Shall I compare thee to a summer’s preloader, bae?”

So, by the time the browser gets around to dealing with CSS or script, it may very well have already grabbed an image – or at least downloaded a fair bit. If you try

<img id=thingy src=picture.png alt="a mankini">
…
@media all and (max-width:600px) {
  #thingy {content: url(medium-res.png);}
}

@media all and (max-width:320px) {
  #thingy {content: url(low-res.png);}
}

you’ll find the correct image is selected by the media query (assuming your browser supports content on simple selectors without :before or :after pseudo-elements), but the preloader will already have downloaded the resource pointed to by the <img src>, and then the one the CSS replaces it with gets downloaded too. So you get a double download, which is not what you want at all.

Alternatively, you could have an <img> with no src attribute, and then add it in with JavaScript – but then you don’t start fetching the resource until much later, delaying the loading of the page. Because your browser won’t know the width and height of the image that the JS will select, it can’t leave room for it when laying out the page, so you may find that your page gets reflowed and, if the user was reading some textual content, she might find the stuff she’s reading scrolls off the page.
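Something like this, as a rough sketch (the breakpoints and file names just echo the example above):

<img id=thingy alt="a mankini">
<script>
// This only runs once the script has been fetched and parsed – long after
// the preloader has finished scanning the markup.
var img = document.getElementById('thingy');
var width = window.innerWidth;

if (width <= 320) {
  img.src = 'low-res.png';
} else if (width <= 600) {
  img.src = 'medium-res.png';
} else {
  img.src = 'picture.png';
}
// With no width and height attributes, the browser can't reserve space,
// so the page reflows when the image finally arrives.
</script>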

So the only way to beat the preloader is to put all the potential image sources in the HTML and give the browser all the information it needs to make the selection there, too. That’s what the w and x descriptors in srcset are for, and the sizes attribute.
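In markup, it looks something like this (a sketch only – the w values and the sizes breakpoints are made up for illustration, and the file names just echo the example above):

<img src="picture.png"
     srcset="low-res.png 320w, medium-res.png 600w, picture.png 1024w"
     sizes="(max-width: 600px) 100vw, 50vw"
     alt="a mankini">

The preloader can read all of that without waiting for CSS or JavaScript, pick the most appropriate source, and download it exactly once. (The x descriptor – for example, low-res.png 1x, high-res.png 2x – works the same way for display-density switching.)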

Of course, I’ll explain it with far more panache and mohawk in Barcelona. So why not come along? Go on, you know you want to and I really want to see you again. Because I love you.


Reading List

Wootarama! It’s my 100th reading list, and the penultimate blogpost of this blog’s eleventh year. The Queen just sent me a telegram. You can send me Guinness or Laphroaig.

Reading List

Reading List ninety-nine. With a flake in it.

Reading List

Ooh, ooh, it’s the 98th Reading List (including last week’s Device Detection vs Responsive Web Design-themed list). Will I get to 100 before 2015?

On the accessibility of web components. Again.

I enjoyed watching Dimitri Glazkov’s introduction to Web Components, Easy composition and reuse with Web Components, given at the Chrome Developer Summit. It’s an excellently-constructed talk that builds on the use cases that web components address to make a compelling argument for the technology.

At 11 min 55 seconds, after a slide reading “Make HTML useful”, Dimitri says

Custom elements is really neat. It basically says, “HTML it’s been a pleasure”.

There we are. Bye-bye HTML; you weren’t useful enough. Hello, brave new world of custom elements. Of course, this isn’t the full messaging; a 20 minute video can’t go into the nuances. But it’s what a lot of people are hearing.

Let’s straighten that out.

One of the advantages of oh-so-boring HTML was that certain elements carried default behaviours in browsers and assistive technology. Like, when you use this mark-up

<label for="form-name">What's yer name?</label>
<input id="form-name">

and you click on the label, the focus goes into the associated input. There’s no need for JavaScript, there’s no extra fancy stuff for a developer (except setting up the association with the for="" attribute), and there’s a significant usability and accessibility advantage for the end-user.

A recent HTML5 Rocks article by Addy Osmani and Alice Boxhall called Accessible Web Components begins with the words

Custom Elements present a fantastic opportunity for us to improve accessibility on the web.

Yes. Yes. Yes. (Thanks Addy and Alice!) It’s perfectly possible to make web components and custom elements accessible. Alice has an example which I’ve screenshotted in Opera (top) and Safari (bottom).

[Screenshot: Alice Boxhall’s accessible checkbox demo, rendered in Opera (top) and Safari (bottom)]

Note that in the Safari screenshot, the second column of sexy checkboxes doesn’t work at all – there is no checkbox. That’s because Safari doesn’t support web components. You’ll see the same in IE, or in browsers without JavaScript.

Note that the first column does render in Safari, but it’s just normal checkboxes; they aren’t sexy web component-ised as they are in Opera. But – crucially – you can still interact with them, as they’re web components progressively enhancing silly old “useless” HTML. It works like this:

<input type="checkbox" is="io-checkbox">

Simple, huh? You have a silly old useless HTML element, and a new attribute that says “this is extended via web components into a special element I’m calling ‘io-checkbox’”. The web component inherits all the silly old useless behaviour – associating labels with form fields, activation with the keyboard – for free.

Compare with the sexy but not progressively-enhanced way that doesn’t work in older browsers (the second column):

<io-custom-checkbox tabindex="0" role="checkbox"></io-custom-checkbox>

There’s a super-whizzo-fabbo-megalicious UltraShiny custom element there, which has no graceful degradation. It needs a tabindex and a role there because who wants that silly old useless HTML behaviour? Not us! We’re post-HTML. Yay!

Snarking aside, why do so few people talk about extending existing HTML elements with web components? Why’s all the talk about brand new custom elements? I don’t know.

Of course, not every new element you’ll want to make can extend an existing HTML element. In that case, you can still make your custom element accessible. Even in the super-whizzo-fabbo-megalicious UltraShiny world of web components, you can – and should – still add ARIA information to make your code accessible. Just because you’re hiding nasty code behind the Shadow DOM, it doesn’t mean that you can brush proper coding under the web components carpet.
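As a rough sketch of what that means in practice (the element name fancy-checkbox is made up, and I’m using the customElements.define() registration syntax rather than the older registerElement() call), the component can carry its own ARIA plumbing and keyboard behaviour:

<fancy-checkbox></fancy-checkbox>

<script>
class FancyCheckbox extends HTMLElement {
  constructor() {
    super();
    // Toggle on click and on the space key, as a native checkbox would
    this.addEventListener('click', () => this.toggle());
    this.addEventListener('keydown', (event) => {
      if (event.key === ' ') {
        event.preventDefault();
        this.toggle();
      }
    });
  }
  connectedCallback() {
    // Hand-crank the semantics a native <input type=checkbox> gives us for free
    if (!this.hasAttribute('role')) this.setAttribute('role', 'checkbox');
    if (!this.hasAttribute('tabindex')) this.setAttribute('tabindex', '0');
    if (!this.hasAttribute('aria-checked')) this.setAttribute('aria-checked', 'false');
  }
  toggle() {
    const checked = this.getAttribute('aria-checked') === 'true';
    this.setAttribute('aria-checked', String(!checked));
  }
}
customElements.define('fancy-checkbox', FancyCheckbox);
</script>

It’s more work than the is="" route – which is exactly why extending an existing element is the better default whenever one fits.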

You’d hope that those who are assiduously pushing components into the platform would ensure that their demos did this – after all, those demos are meant to be studied, copied and adapted by developers, right?

Wrong. Take a look at Polymer gmail, a “Polymer version of New Gmail app”. Patrick Lauke points out

Google has expertise in-house to create functional, beautiful, web-component stuff that is also accessible. It would be great if high-profile demos like these would actually take advantage of those resources to create things that work not just for sighted mouse/touchscreen users…

To which he received the reply

There’s plenty that can be done in the convenience of unlimited time and resources. If you’d like to help, please submit a PR.

A big demo of cutting-edge Google technology, made by Google, and there are no resources simply to make it accessible.

At Paris Web, Karl Groves and I gave a talk, Web Components – the right way, in which we talked of extending existing elements and adding ARIA, and suggested that web accessibility advocates actively fix issues in open-source projects. But I meant fixing small projects that you’re using in your own sites – like the WordPress Live Comment Preview plugin, which I tweaked, thereby making 44,837 sites accessible.

I wasn’t talking about fixing demos by a company with a $362.48 Billion market capitalisation. As Patrick Lauke so eloquently puts it:

My resources are currently a bit more stretched than Google’s…but I’ll put it on my to-do list ;)

I’m a fan of web components. But I’m increasingly worried about the messaging surrounding them.

Device Detection vs Responsive Web Design

This week’s reading list is devoted to Device Detection vs Responsive Web Design.

With all the cool kids getting into RWD these days, it’s time to have a look at the Device Detection companies again. Device Detection is the practice of matching a device’s UA string against a table of such strings, looking up known characteristics of that device, and then serving different websites accordingly.

Of course, the utility of such services depends on the quality of the look-up table: how many devices it knows about (all the ones in the world, ever?), how frequently it’s updated (have they added the Umbongo J2O TrouserPhone S+ that was released on Tuesday yet?) and how accurate the data is (does the TrouserPhone S+ really have a 178680979 × 7 pixel smellovision display?). They are, however, an order of magnitude more reliable than terrible CMS plugins or JavaScripts that were written years ago and which register IE11 as IE1, or don’t know Chrome exists. UA strings are comically unreliable, being the frontline in an unceasing battle between browser-sniffers who want to deny entry to certain browsers, and browser vendors who want their users to get a first-class experience.

Examples of Device Detection companies include ScientiaMobile (WURFL), DeviceAtlas and 51Degrees. The databases owned by such companies do include device characteristics unavailable through client-side detection. For example, you can’t find out from JavaScript whether a device actually has a touchscreen, the physical dimensions of its screen, or the retail price of the device (which advertisers want to know, apparently – you only want to advertise yachts to gold iPhone or Umbongo J2O TrouserPhone S+ owners).
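As a toy illustration of the basic idea only – the real services keep vastly larger, constantly updated databases, and the devices and traits below are invented – it boils down to a lookup from UA string to known characteristics:

// Hypothetical, hand-rolled device table; real ones hold thousands of devices
const deviceTable = [
  { pattern: /TrouserPhone S\+/, traits: { touchscreen: true, screenWidthMm: 62, priceBand: 'premium' } },
  { pattern: /iPhone/,           traits: { touchscreen: true, screenWidthMm: 59, priceBand: 'premium' } }
];

function lookUpDevice(userAgent) {
  // First matching entry wins; unknown devices fall back to null
  const entry = deviceTable.find(function (device) {
    return device.pattern.test(userAgent);
  });
  return entry ? entry.traits : null;
}

// e.g. on a server, pass in the User-Agent request header
lookUpDevice('Mozilla/5.0 (Linux; Umbongo J2O TrouserPhone S+) ...');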

Mike Taylor, an ex-colleague of mine at Opera, now at Mozilla (and a pathological hater of chickens), set up a collaborative document to collect use cases that people are trying to solve with UA detection (and which can’t be solved by feature detection), which is summarised by Karl Dubost (ex-Opera, now Mozilla) in User Agent Detection Use Cases.

Those who oppose Device Detection tend to do so for philosophical reasons – it’s one web, and we shouldn’t serve different content to different devices or browsers – or because they’re browser vendors: Internet Explorer, Firefox OS and Opera all have reasons to dislike browser sniffing and device detection (“this website is only available to iPad users”). Google, meanwhile, uses device detection all the time on its properties, as do many other large companies.

The device detection companies have begun to issue reports comparing their products with responsive, client-side techniques. Here are three that I’ve seen this week:

They’re worth reading. Of course, case studies only go so far; every business, territory and site is different. One thing everyone agrees on is that performance matters – slow sites lead to fewer conversions. mobiForge has an article, M-commerce insights: Give users what they want, and make it fast, that claims

RWD sites were the slowest, on average, to load on mobile – 8.4 seconds – while dedicated mobile sites loaded fastest – in 2.9 seconds. Non-responsive desktop sites took 6.57 seconds to load.

I’d like to see proper A/B testing: a well-made responsive version of a site versus its “m-dot” equivalent, redirected from its canonical URL and assembled after a device look-up, across a variety of devices and network conditions. If we’re going to argue, it might as well be about data.

Update 1 Dec 2014: Here’s some initial research on the top 1,000 mobile websites, M dot or RWD. Which is faster?, which concludes that “m dot” sites are 50% slower for time to first byte, and

RWD sites are VERY competitive on Visually Complete and SpeedIndex scores. The median values are within 5% for both metrics. Even though it appears that RWD is faster, there is enough fluctuation in the data that we should probably call it a dead heat.
