Archive for the 'HTML5' Category

Reading List



Thoughts on Adobe Edge

Last week, I dragged my snot-filled carcass down to London for a day-long presentation by Adobe called Create The Web.

I’m an occasional user of Adobe products: I used Dreamweaver in my last job (and was a beta tester for a previous version), and use Photoshop from time to time, although I use perhaps 2% of its capabilities. I also have some chums at Adobe, but they’re weedier than me, so they didn’t try to threaten me into being nice to them.

In passing, it’s interesting to note that while Twitter has lots of griping about Adobe products, they managed to get about 800 people into a Leicester Square cinema for a full-day product pitch. That suggests Adobe retains significant mindshare.

I was there because I wanted to see their new Edge range, as the tools that authors use to make websites directly affect the quality of those sites’ code, which directly affects the interoperability of the Web. I was therefore watching the day from two occasionally opposing perspectives: firstly, as the representative of a browser vendor that is sometimes disadvantaged by developers not using the correct prefixes etc., but also as a web author who, all other things being equal, prefers using IDEs to typing in code.

The decline of Flash has not diminished the appetite of site owners and developers for eye-candy and movie-like effects. That’s why Adobe is pushing for lots of new effects in CSS: cool stuff like CSS Filters and CSS Compositing and Blending and also anachronistic “we wish the Web were print” specs like CSS Regions.

Regardless of your opinions on Flash as a technology or plug-in, there’s no denying that the Flash Pro development IDE beats coding canvas in raw JavaScript. It’s also true (according to my good chum and co-author, Remington Sharp) that those developers just coming to canvas and motion graphics now are bumping up against problems that Flash developers solved a decade ago.

Therefore, Adobe has made a very smart move by making the new tools for animating stuff feel familiar to Flash developers. This was explicitly called out: Flash devs, your skills are not dead; the technology might be, but your experience and creativity are still in demand. This is true, and a shrewd business strategy.

(A word about product nomenclature and confusion: the DHTML tool that was called Adobe Edge is now called “Edge Animate”. “Edge” now refers to the whole suite of new Adobe tools. Animate seems to produce DHTML, flying <div>s around with CSS and JavaScript. Apparently, the Flash IDE can produce “HTML5” output, but that’s <canvas>. If your stuff involves text, use Animate so the text remains text and is therefore accessible.)

I haven’t had a chance to download my copy of Edge Animate yet. Lee Brimelow gave an excellent and entertaining developer-focussed presentation building up an animation, and there were a few nasties in the code it produced.

For example, it seemed that, by default, the code is simply empty <div>s with everything else injected by JavaScript. This means that someone without JS sees nothing at all. The “static HTML” export (which ought to be the default, in my opinion) at least puts the images in the markup, so a browser without JS sees *something*, albeit unanimated. That code, however, was invalid: it produced <img>…</img>, and there is no closing </img> tag in HTML5 (or any previous incarnation of HTML, for that matter).

Encouragingly, Ryan from Adobe contacted me after I tweeted about this, asking for further feedback, which I gave.

Other products that interested me were PhoneGap Build which allows you to upload a PhoneGap project and receive all the different packages through The Cloud™. I’d definitely use this. Who wants to dick around with all the different SDKs?

Edge Code is built on top of an open-source project started by Adobe called Brackets. Some hard-core developers I spoke to (eg, those who wouldn’t pee on Dreamweaver if it were on fire) seemed impressed. It seems Adobe are very serious about getting external developers involved: pull requests are reviewed daily, Agile sprints mean a quick iteration time so your contributed code doesn’t languish interminably, and priority is given to external contributions.

The last product that interested me is Reflow, which isn’t out yet. It’s a drag-and-drop visual editor, which allows you to shrink the “stage” and, when your design starts to fall apart, set a breakpoint (which writes a Media Query) after which you re-design the page for the new page width. I haven’t seen the code that’s produced, but it feels to me an intuitive way for a designer to work.

Overall, it was exciting to see a company working hard to come up with a new strategy. The jury is out on the code it produces; Adobe is investing very heavily in WebKit, and one presenter’s saying “this also works in old browsers like IE9 and Firefox” makes me uneasy. But it doesn’t have to be bad: Microsoft’s Web Essentials Visual Studio extension does an excellent job of adding all the vendor prefixes, even though Microsoft are heavily investing in IE.

The success of the product will ultimately depend on the price. And I’m curious to see how they’ll integrate with Dreamweaver, which is pricey and currently lacks many of the Edge features.

(Note: this is a personal post and doesn’t reflect the opinion of my employer).

Give Paul Robert Lloyd’s Thoughts on Adobe Edge a read, which prompted me to write this up. Fellow HTML5 Doctor Ian Devlin says PhoneGap Build is Awesome.


This week, I went to speak at Apps World, a great big “Global Developer Event, Mobile Marketing Conference” (according to its site). It was at Earls Court, full of people in suits (6,000 attendees), and there was an average of 9.6 synergies per square metre.

As I was waiting for my panel, I listened to the preceding talk, “Is HTML5 the future?” (answer: “yes”). The first question from the audience was about when Digital Rights Management (DRM) will come to HTML5 “so my company can start using it”. Many of the other attendees were nodding their heads in agreement.

At the end of my panel, the moderator Robin Berjon asked us “which feature would you like to see come to HTML soon?” and I rather surprised myself by answering “DRM”.

I don’t want DRM. I dislike DRM. Not particularly because I think everything should be free (I don’t; I like receiving royalties for the best HTML5 book) but because I don’t think it works. The DRM graveyard: A brief history of digital rights management in music demonstrates this excellently.

However, “the suits” believe it does work, and aren’t willing to invest fully in the web stack until there is some attempt at DRM. That’s why Netflix, Google and Microsoft have proposed the Encrypted Media Extensions specification that’s being worked on by the Encrypted Media Task Force of the HTML Working Group at the W3C.

Currently, DRM for video is the province of plugins. About a year ago, Lovefilm moved from Flash to Silverlight:

We’ve been asked to make this change by the Studios who provide us with the films in the first place, because they’re insisting – understandably – that we use robust security to protect their films from piracy, and they see the Silverlight software as more secure than Flash.

Simply put: without meeting their requirements, we’d suddenly have next-to-no films to stream online.

This is a problem for Linux users, as Silverlight doesn’t work on that operating system. It’s potentially a problem for Opera users, too, as Opera isn’t officially supported by Silverlight (although it does work). At least if DRM is moved into HTML rather than plugins, people using smaller browsers or operating systems will be able to choose whether or not to view DRM content. Now, they don’t have any choice at all.

But philosophy aside, ultimately, DRM in HTML is coming. WebKit appears to have started work on it six months ago, and Microsoft will presumably add it to Internet Explorer. And if those two giants support DRM, which browser would dare not to support it, and risk being blocked by video sites? It’s like H.264 on mobile: nobody likes it very much, but it’s a reality.

Like an unpleasant medical procedure such as having a catheter inserted, if it must happen it might as well come sooner rather than later.

(This is a personal view and does not reflect that of my employer)

Also see More on DRM in HTML5 in which I belatedly realise that the spec is just a plugin architecture.

Scooby Doo and the proposed HTML5 <content> element

Note: Since writing this, I’ve continued vacillating and now support a <main> element. Why I changed my mind about the <main> element.

Trigger warning: contains disagreement about accessibility.

I’ve been vacillating (ooh err, missus) for two weeks from one opinion to the other regarding a proposed (and rejected) <content> element. This weekend, The Mighty Steve Faulkner wrote an unofficial draft of a <maincontent> element.

Dude, where’s my content?

For a while, people have suggested that HTML add a <content> element that wraps main content, because many websites have something like <div id="content"> surrounding the area that authors identify as their main content, which they then use to position and style that central content area.

Fans of WAI-ARIA also like to hang role="main" on that area, to tell assistive technology where the main content of the page starts. I do this too.

The editor of the HTML specification, Ian Hickson, rejected a new <content> element:

What would the element _mean_? If it’s just “the main content”, then that is what the element’s contents would mean even without the element, so really it means the element is meaningless. And in that case, <div> is perfect, since that’s what it is: a grouping element with no meaning.

The primary argument against a special element is that it isn’t necessary, because the beginning of “main content” can be identified by a process of elimination that I call the “Scooby Doo algorithm”: you always know that the person behind the ghost mask will be the sinister janitor of the disused theme park, simply because he’s the only person in the episode who isn’t Fred, Daphne, Velma, Shaggy, or Scooby. (Like most Scooby fans, I’m pretending Scrappy never existed.)

Similarly, the first piece of content that’s not in a <header>, <nav>, <aside>, or <footer> is the beginning of the main content, regardless of whether it’s contained in an <article> or a <div>, or whether it is a direct child of the <body> element.
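As a sketch of the elimination step (the flat node list and its shape are invented for illustration; a real implementation would walk the DOM tree), the Scooby Doo algorithm might look something like this:

```javascript
// The main content is found by elimination: it's the first node that
// isn't one of the known non-main sectioning elements.
const NOT_MAIN = new Set(['header', 'nav', 'aside', 'footer']);

function findMainContent(nodes) {
  // Return the first top-level node not ruled out by elimination,
  // regardless of whether it's an <article>, a <div>, or anything else.
  return nodes.find(node => !NOT_MAIN.has(node.tag)) || null;
}

// Example: a typical page structure, flattened to top-level children of <body>.
const body = [
  { tag: 'header', id: 'masthead' },
  { tag: 'nav',    id: 'site-nav' },
  { tag: 'div',    id: 'content' },  // ← the sinister janitor
  { tag: 'aside',  id: 'sidebar' },
  { tag: 'footer', id: 'colophon' },
];

console.log(findMainContent(body).id); // "content"
```

Unmasking the janitor requires no extra authorial work at all, which is the whole point of the argument against a new element.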

Authors do need to be able to identify their main content, both for styling (in which case <div> seems the most appropriate element) and as a target for “skip links”, in which case the current <a href="#main">Skip nav</a> … <div id="main"> pattern still does the trick.

It’s worth noting that people often code “skip links” believing it’s required by WCAG 2, but if browsers implemented the Scooby Doo algorithm that is explicitly not the case: “It is not the intent of this Success Criterion to require authors to provide methods that are redundant to functionality provided by the user agent.”

Many assistive technology user agents understand the ARIA role="main", so skip links should be unnecessary; ATs can home in on <div id="main" role="main"> by themselves, even without supporting the Scooby Doo algorithm.

This suggests to me that a new element isn’t required. But…

Paving cowpaths, ease for authors

Chaals (ex-Opera, now Yandex) wrote:

To turn the question around, if it is more convenient for authors to identify the main content, and not think about the classification of other parts, should we offer that facility as part of the platform? Or does it make sense to say that only the exhaustive identification of all supplementary content is an appropriate way to mark up a page anyway?

Chaals argues that it makes authoring easier – suddenly you get extra accessibility by adding just one <content> element, rather than adding all the other elements that the Scooby Doo algorithm can then exclude. People using CMSs, who control only the textarea that gets lumped in as “main content” and can’t touch the surrounding areas, can now add an element without having to ask others to tweak templates.

But then, they can do this already, by surrounding their content with <div role="main">, and this already works in assistive technologies.

A flawed argument for a new element is that it paves a cowpath and so should be added to the language. It’s certainly the case that <div id="main"> and <div id="content"> are very frequently found in pages – they were #2 and #6 among the most frequently used id attributes in the 2008 MAMA: What is the Web made of? report.

But not every cowpath needs paving. If it did, we’d also have a <logo> and a <container> element (#4 and #5 respectively), and we’d be recommending tables for layout. If something can be done automatically, without requiring extra authorial work, shouldn’t that be favoured? In the same way that we like HTML5 form types as they’re baked into the browser, shouldn’t the Scooby Doo algorithm be preferable?

Of course, the Scooby Doo algorithm requires the author to use <header>, <footer>, <nav> and <aside> — but for authors who don’t want to, or aren’t able to, write HTML5, ARIA’s role="main" is there precisely as a bridging technology.

There’s also the argument that authors expect there to be a <content> element, so its absence violates the Principle of Least Surprise. But I’m not sure that’s a valid argument. Implementing the Scooby Doo algorithm would mean that even pages whose authors do nothing for accessibility could have their main content area programmatically determined. ARIA exists for pages that aren’t in HTML5, or until the Scooby Doo algorithm is widely supported, and analysis shows that most ARIA is correctly used by authors.

Why add extra complexity, which is more to go wrong and thus potentially harms accessibility?


On the publication of Editor’s draft of the <picture> element

On Tuesday, a W3C Editor’s Draft of a proposed HTML Responsive Images Extension was published, edited by Matt Marquis and Microsoft’s Adrian Bateman:

This proposal adds new elements and attribute to [HTML5] to enable different sources of images based on browser and display characteristics. The proposal addresses multiple use cases such as images used in responsive web designs and different images needed for high density displays.

It’s a mash-up of the strawman <picture> element I proposed in December last year, and the Hixie-blessed srcset attribute on img proposed by Ted O’Connor of Apple. (See Responsive images: what’s the problem, and how do we fix it? for more background.)
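To give a flavour of that mash-up, the combined syntax looks roughly like this (the filenames and breakpoints here are placeholders of my own, not taken from the draft):

```html
<picture alt="A cat vacillating, ooh err missus">
  <!-- Media queries choose a source by viewport; srcset chooses by pixel density -->
  <source media="(min-width: 45em)" srcset="large.jpg 1x, large-hd.jpg 2x">
  <source media="(min-width: 18em)" srcset="medium.jpg 1x, medium-hd.jpg 2x">
  <source srcset="small.jpg 1x, small-hd.jpg 2x">
  <!-- Fallback for browsers that don't understand <picture> -->
  <img src="small.jpg" alt="A cat vacillating, ooh err missus">
</picture>
```

The `<source>` children come from my original `<picture>` strawman; the `srcset` micro-syntax for pixel densities comes from Ted O’Connor’s proposal.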

Of course, the publication of an editor’s draft means nothing. There isn’t any indication that any browser manufacturer will implement it. Philip Jägenstedt of Opera (but speaking personally) has already commented:

Personally, I don’t think it’s a good idea to reuse the syntax of <video> + <source>, since it invites people to think that they work the same when they in fact are very different. It could be made to work, but the algorithm needs a lot more detail. Disclaimer: This is neither a “yay” nor “nay” from Opera Software, just my POV from working on HTMLMediaElement.

Those eager to bash Hixie and the WHATWG are using the new spec as if it were a cudgel; “this is how you deal with Hixie and WHATWG” says Marc Drummond.

I don’t think that’s productive. What is productive is the debate that this publication will (hopefully) foster. Two different but connected issues need debating. Firstly, the technical debate needs to continue. There are problems with the <picture> element, such as Philip points out, and its verbosity and potential to be a maintainability nightmare, as pointed out by Matt Wilcox.

And we need to work out how everyone can play a role in specifying the Web. The W3C dropped the ball with XHTML 2 because it was a philosophically elegant, intellectually satisfying specification that bore absolutely no relevance to the real world.

WHATWG picked up that ball, and grew the Web, because they knew what developers wanted: Flash and Silverlight’s application capabilities. The browser manufacturers who made up the early WHATWG knew that if those capabilities weren’t added to browsers, they would die.

But the gatekeepers of WHATWG aren’t jobbing developers or designers. Nothing wrong in that: neither am I any more. But their inability or unwillingness to account for the art direction use case has caused them to drop the ball here.

We need to get everyone’s voices heard: people like Philip with implementor’s experience, people like Hixie with his encyclopaedic understanding of how specs inter-relate; and we must also solicit and understand the needs of those who make websites but can’t express their requirements in the language of specs, algorithms and IDL.

I don’t know how we can do this. The CSS Working Group had designers attending as invited experts for a short while, but I’m told that wasn’t particularly useful for either group. A group of designers called the CSS Eleven hailed themselves as “an international group of visual web designers and developers who are committed to helping the W3C’s CSS Working Group to better deliver the tools that are needed to design tomorrow’s web” and then promptly disappeared.

So congratulations to Matt Marquis and the others in the Responsive Images Community Group who have had the staying power to produce a spec for discussion.

Hopefully, this strawman spec can be refined into something workable that gives developers a simple, declarative way to send different-sized, art directed images depending on bandwidth and device characteristics. This means organisations waste less bandwidth, and it’s a faster (and cheaper) Web for consumers — you know, the great unwashed that we make websites for.

Added 8 September: The road to responsive images – useful introduction to actually authoring it in your websites.

Reading list




Super-NSFW Corner

  • Fifty Shades Generator – “Traditionally, print and web designers had to make use of placeholder text known as Lorem Ipsum. Now, creatives can excite clients in more ways than one with Fifty Shades of Grey-inspired filler text.”

Credit card, bank account numbers in HTML5 forms

It’s not appropriate to use input type="number" for strings of digits such as bank account numbers or credit card numbers.

input type="number" is really for quantities, or real numbers. It’s implemented by Opera, Chrome and Safari as a spinner field (see the “shoesize” field in my HTML5 Forms Demo) – which isn’t a terribly good way to input a 16-digit number.

Although credit card and bank account numbers are colloquially called “numbers”, they’re really strings of digits – unlike real numbers, leading zeros are significant, and you wouldn’t expect to perform any arithmetic operations on them.

Therefore, the correct way to mark up forms that expect credit card or bank account details is with input type="text". If you want magical HTML5 client-side validation, add pattern="…", with the value of pattern being a regular expression that describes the string.
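For instance, the same regular expression you’d put in pattern can be tried out in JavaScript (the 13–16 digit range here is purely illustrative, not a real-world validation rule – real card numbers also involve check digits, spacing conventions and so on):

```javascript
// An illustrative regular expression for the pattern attribute:
// a plain string of 13 to 16 digits, nothing else.
const cardPattern = /^[0-9]{13,16}$/;

console.log(cardPattern.test('4111111111111111')); // true  – 16 digits
console.log(cardPattern.test('041234567890123'));  // true  – the leading zero survives in a string
console.log(cardPattern.test('12,345'));           // false – not a plain string of digits

// And why type="number" is wrong: numeric parsing throws away leading zeros.
console.log(Number('0123456') === 123456); // true – the significant zero is gone
```

That last line is the nub of it: treat these values as numbers and you silently lose information.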

In The Future™ you’ll be able to restrict such fields to numeric-only input, as a hint to the user, with the newly specified inputmode="numeric" attribute:

Numeric input, including keys for the digits 0 to 9, the user’s preferred thousands separator character, and the character for indicating negative numbers. Intended for numeric codes, e.g. credit card numbers.

Also in The Future™, you’ll be able to help the browser autofill such a field by telling it that you’re expecting a credit card number; if the user has one set up, the browser can autocomplete it using the newly specified autocomplete="cc-number".

So the full markup for a credit card number field would be <input type="text" pattern="…" inputmode="numeric" autocomplete="cc-number">.

(Added 27 August 2012:) Perhaps not so far in the future: Mozilla have just added inputmode support, and Chrome implements autocomplete with a vendor prefix.

(The WHATWG wiki has a good explanation of the rationale behind autocomplete types, NB: this text is from the original proposal; I haven’t checked if everything it proposed actually made it into the spec.)

Reading List


Sublime Text 2

I haven’t actually enjoyed using an editor since VAX EDT in my old programming days, but Sublime Text 2 is an excellent program that not only stays out of your way but also has lots of utilities and features.


Dell Extends Ubuntu Retail into India – Unreported, of course, because it’s about FOREIGNERS, but Dell have been featuring Ubuntu on a wide range of their computers in China, starting at 220 stores and expanding to 350. They’re also expanding into 850 stores across India.


Specifying which camera in getUserMedia

getUserMedia started life as a nice little API that allowed the developer to grab video and/or audio from a device.

navigator.getUserMedia({audio: true, video: true}, success, error);

It’s shipping for camera access in Opera 12 desktop and Opera Mobile 12 and implemented in Chrome nightlies.

The spec has ballooned since I last looked at it in detail. A new “constraints API” adds significant complexity, and the language of the spec isn’t rigorous (yet). This is likely to delay standardisation, yet the simple use cases it originally addressed remain the core features.

Originally, it had a method for authors to hint to the user agent which camera to use – you could suggest “user” for the camera that faces you, or “environment” for the camera that faces outwards. This has disappeared from the spec, to be dealt with later, once the complex capabilities & constraints API is fleshed out.

Should the author be able to specify which camera should be used? It seems to me that this would be a good idea. For example, my mobile phone camera is usually set to use the rear-facing “environment” camera, but if I were to start up a video-conferencing app (or any of the fun “Photo booth” apps on Shiny Demos) I’d like the app to switch to using the forward-facing “user” camera without my having to manually switch camera. (Some devices don’t have hardware switches, so would require me to go to a system preference to change which camera to use, which would be a major pain in the arse.)

I strongly feel that the Working Group should allow authors to choose which camera to use before fleshing out advanced capabilities & constraints APIs which are beyond the use-cases of many authors.

However, although most phones will continue to have two cameras, some devices like the Mito 611 have more than two. How do we write a specification for hinting which camera to use with multiple cameras?

Reading List

Web Standards

Web Industry


Heartwarming corner

“Raising an Olympian” – a three-minute video from Kavita Raut’s rural Indian mum. Even though it’s an ad and I’m a cynical old bastard, I found it quite touching.