Archive for the 'mobile' Category

Notes on Google’s HTTPS Usage report

On Friday’s reading list I linked to Google’s HTTPS Usage report which said that

Secure web browsing through HTTPS is becoming the norm. Desktop users load more than half of the pages they view over HTTPS and spend two-thirds of their time on HTTPS pages. HTTPS is less prevalent on mobile devices, but we see an upward trend there, too.

(The report is undated, but as the data continues after October 2016, I assume it’s current. As an aside, please put dates on research and stats you publish!)

Erik Isaksen tweeted me asking “I’m wondering why ‘especially on desktop’”. I replied with my speculations, reproduced here in longer form:

Despite the rise in mobile use, desktop numbers aren’t declining, and perhaps many people do as I do: I might search and compare products on my mobile, but I actually do the purchases on my desktop machine. It’s a lot easier to type on a full-sized keyboard than a virtual keyboard, and purchases on the web are still laborious. I doubt it’s just me; generally, users abandon mobile purchases twice as often as desktop purchases. (That figure is from Google’s tutorial on the Payment Request API. I’m eagerly awaiting completion of Opera’s implementation.)
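
For the curious, here’s a minimal sketch of the kind of checkout flow the Payment Request API enables. The payment method and amounts are invented for illustration; consult the spec for the real details:

<script>
// A minimal, illustrative Payment Request flow (methods and amounts made up)
const request = new PaymentRequest(
  [{ supportedMethods: 'basic-card' }],   // how the user may pay
  { total: { label: 'Total', amount: { currency: 'GBP', value: '9.99' } } }
);

request.show()                            // the browser shows its own payment UI
  .then(response => {
    // send response.details to your server for processing, then:
    return response.complete('success');  // dismiss the payment sheet
  })
  .catch(err => console.error('Payment failed or was cancelled:', err));
</script>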

Similarly, I never do online banking on my mobile; I always use my desktop machine, which has a direct line into it. (Even though I know that my bank’s website is HTTPS. But when I visit my branch, I notice their internal systems are all running IE6…)

It’s also worth bearing in mind that many of the regions that are mobile-first are home to large populations of unbanked people, or populations who don’t use credit cards much. There’s a lot less imperative to offer local websites securely when there is no money changing hands through them, while the services that are popular everywhere (Gmail, Facebook etc) are already HTTPS.

I’m told that HTTPS is comparatively expensive for site owners in developing economies, and advertising revenues are declining as more and more people use ad-blockers: 36% of smartphone users in Asia-Pacific use ad-blockers, as do two-thirds of people in India and Indonesia (source), and statistics from Opera’s built-in ad-blocker show that Indonesia has the most ads blocked per person in the region.

I suppose the crux of my speculation is: do people perform different kinds of tasks on mobile and desktop? Some tasks – banking, purchasing – require more convoluted input and are thus more suited to desktop devices with a full-sized keyboard, and such tasks are performed on HTTPS sites.

But this is only speculation. Anyone have any hard data on why HTTPS is more prevalent on desktop than mobile?

8 November 2016: Amelia Bellamy-Royds suggested on Twitter “No hard data, but my guess: secure websites for social media, email, etc., are replaced by native apps on mobile.” This certainly maps to my own experience, as I use the Gmail and Twitter apps on Android.

On URLs in Progressive Web Apps

I’m writing this as a short commentary on Stuart Langridge’s post The Importance of URLs which you should read (he’s surprisingly clever, although he looks like the antichrist in that lewd hat).

Stuart says

I approve of the Lighthouse team’s idea that you don’t qualify as an add-to-home-screen-able app if you want a URL bar

Opera’s implementation of Progressive Web Apps differs from Chrome’s here (we take only the content layer of Chromium and implement all the UI ourselves, precisely so we can do our own thing). Regardless of whether the developer has chosen display: standalone or display: fullscreen in order to hide the URL bar, Opera will display it if the app is served over HTTP, because we think the user should know exactly where she is if the app is served over an insecure connection. Similarly, if the user follows a link from your app that goes outside its domain, Opera spawns a new tab and forces display: browser so the URL bar is shown.
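
For context, the display mode comes from the site’s web app manifest. A minimal sketch (the names and paths here are invented for illustration):

{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "icons": [{ "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }]
}

The page points to this file with <link rel="manifest" href="/manifest.json">.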

But I take Jeremy Keith’s point:

I want people to be able to copy URLs. I want people to be able to hack URLs. I’m not ashamed of my URLs …I’m downright proud.

One of the superpowers of the Web is URLs, and fullscreen progressive web apps hide them (deliberately). After our last PWA meeting with the Chrome team in early February, I was talking about just this with Andreas Bovens, the PM for Opera for Android. We mused about some mechanism (a new gesture?) that would allow the user to see and copy (if they want) the URL of the current page. I’ve already heard examples of developers making their own “share this” buttons — and devs re-implementing browser functionality is often a klaxon signalling that something is missing from the platform.
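
Here’s a sketch of the kind of DIY “share this” button I mean, using the Web Share API where the browser supports it (with a crude prompt() fallback elsewhere):

<button id="share">Share this page</button>
<script>
document.getElementById('share').addEventListener('click', function () {
  if (navigator.share) {
    // Web Share API: hand the URL to the OS-level share sheet
    navigator.share({ title: document.title, url: location.href });
  } else {
    // Crude fallback: surface the URL so the user can copy it
    window.prompt('Copy this URL:', location.href);
  }
});
</script>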

When I mentioned our musings on Twitter this morning, Alex Russell said “we’ve been discussing the same.” It is, as Chrome chappie Owen Campbell-Moore said, “a difficult UX problem indeed”, which is one reason that Andreas and I parked our discussion. One of Andreas’ ideas is a long press on the current page, which would then offer an option to copy or share the URL of the page you’re currently viewing. (This would mean that a long press is no longer available as an action for site owners to use on their sites. Probably not a big deal?)

What do you think? How can we best allow the user to see the current URL in a discoverable way?

web.next: Progressive Web Apps and Extensible Web

Here’s the keynote talk I did at Render Conference, Oxford in April. (Slides.)

All the other talks are available. Yay!

I told the nice organising types that I wouldn’t accept the speaker fee because public speaking is my job. Rather than just pocket the money, they suggested we donate it to a worthy cause, which is very good of them.

So I asked them to send it to a rural school in Cambodia, where a friend of mine has been volunteering. They’re building a computer lab to train the kids and local people. In one of the poorest countries on earth (the average salary is $80/month), a second-hand laptop at $250 is still a luxury. As a former primary teacher in Bangkok, I find this ticks all my personal boxes: education, S.E. Asia and the web.

Thank you, Ruth and all at Render Conference.

One weird trick to get online — designers hate it!

At the Google Progressive Web Apps afterparty last night, I had two very different conversations within five minutes of each other.

Conversation #1 went

Hey Bruce, lucky you weren’t at REDACTED conference last week. They were bad mouthing Opera! One speaker said, “Anyway, who cares about Opera Mini?”

In the time it took to drink another 5 bottles of free beer (two minutes), conversation #2 happened:

Oh Bruce, hi. We’ve just raised £100million in funding for our business in Asia, and 35% of our users are on Opera Mini.

What’s the difference? Well, for a start, the first was apparently said by a European designer to a room full of European designers, in Europe. The second difference is the word “users”: that conversation focussed on the fact that a technology is used by human beings, which is always, always the point.

Now, I don’t care about Opera Mini per se (I’m not its Product Manager). In the same way, I don’t care about walking sticks, wheelchairs, mobility scooters or guide dogs. But I care deeply about people who use enabling technologies — and Opera Mini is an enabling technology. It allows people on feature phones, low-powered smartphones, people in low-bandwidth areas, people with very small data plans, people who are roaming (you?) to connect to the web.

Sure, I get that Opera Mini can frustrate some designers and developers; your rounded corners, gradients and JavaScript-only APIs don’t work. But CSS isn’t content, and a progressively enhanced website will work (albeit more clunkily) with JavaScript throttled after 3 seconds. (I wrote Making websites that work well on Opera Mini if you want more information on how Mini works.)

I ran the stats today. Of more than 250 million Opera Mini users, 50% are on Android/iOS and 50% are on feature phones. The second group almost certainly have no choice in which browser to use to get a full web experience. That’s 125 million people that designer-on-stage doesn’t care about. People like Donald from Nigeria, people like Silma from Bangladesh. People.

The top territories for Opera Mini use are India, Indonesia, Nigeria, Bangladesh and South Africa. Because conversation #2 was about tangible stuff – millions of pounds, and numbers – let’s look at the economic growth of these nations full of interlopers to our WWW (Wealthy Western Web).

Country       Population      PPP      Growth Rate
India         1,251,695,584   $6,300   7.3%
Indonesia     255,993,674     $11,300  4.7%
Bangladesh    168,957,745     $3,600   6.5%
Nigeria       181,562,056     $6,400   4%
South Africa  53,675,563      $13,400  1.4%

(PPP = GDP per capita at purchasing power parity; figures from the CIA World Factbook)

Sure, those PPP numbers might be low compared with the home countries of designer-on-a-stage and audience, but how do the growth rates compare? These are dynamic, emerging markets. Who cared about China ten years ago?

If you don’t care about Opera Mini users in these areas, you can bet your competitors soon will.

Dear webdevs, from European Blind Union

Many of you lovely readers aren’t on Twitter 24/7, so here’s a blog retweet. Or a “re-bleet”, as I like to call it.

This was posted yesterday by the European Blind Union (“The voice of 30 million #blind and partially sighted people in Europe”)

In other words: yes, please use viewport meta to make content responsive. But don’t muck around with maximum-scale, minimum-scale, and user-scalable properties, as these restrict zooming.
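
In code, that means:

<!-- Good: responsive layout, and the user can still pinch-zoom -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- Bad: these restrictions stop low-vision users zooming -->
<meta name="viewport"
      content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">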

Couldn’t be clearer, could it? We’ve been asked nicely, by those who are affected, so let’s not do it anymore.

Why we can’t do real responsive images with CSS or JavaScript

I’m writing a talk on <picture>, srcset and friends for Awwwards Conference in Barcelona next month (yes, I know this is unparalleled early preparation; I’m heading for the sunshine for 2 weeks soon). I decided that, before I get on to the main subject, I should address the question “why all this complex new markup? Why not just use CSS or JavaScript?” because it’s invariably asked.

But you might not be able to see me in Catalonia to find out, because tickets are nearly sold out. So here’s the answer.

All browsers have what’s called a preloader. As the browser is munching through the HTML – before it’s even started to construct a DOM – the preloader sees “<img>” and rushes off to fetch the resource before it’s even thought about speculating about considering doing anything about the CSS or JavaScript.

It does this to get images as fast as it can – after all, they can often be pretty big and are one of the things that boosts the perceived performance of a page dramatically. Steve Souders, head honcho of Velocity Conference, bloke who knows loads about site speed, and renowned poet, called the preloader “the single biggest performance improvement browsers have ever made” in his sonnet “Shall I compare thee to a summer’s preloader, bae?”

So, by the time the browser gets around to dealing with CSS or script, it may very well have already grabbed an image – or at least downloaded a fair bit. If you try

<img id=thingy src=picture.png alt="a mankini">
…
@media all and (max-width: 600px) {
  #thingy { content: url(medium-res.png); }
}

@media all and (max-width: 320px) {
  #thingy { content: url(low-res.png); }
}

you’ll find the correct image is selected by the media query (assuming your browser supports content on simple selectors, without :before or :after pseudo-elements), but the preloader will already have downloaded the resource pointed to by the <img src>, and then the one that the CSS replaces it with is downloaded too. So you get a double download, which is not what you want at all.

Alternatively, you could have an <img> with no src attribute, and then add one with JavaScript – but then you’re not fetching the resource until much later, delaying the loading of the page. Because your browser won’t know the width and height of the image that the JS will select, it can’t leave room for it when laying out the page, so you may find that your page gets reflowed and, if the user was reading some textual content, she might find the stuff she’s reading scrolls off the page.
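
To make that concrete, here’s a sketch of that JavaScript approach (reusing the image names from the CSS example above), which shows why the fetch starts so late:

<img id=thingy alt="a mankini">
<script>
// This runs only when the parser reaches the script – long after
// the preloader would have fetched a src attribute.
var img = document.getElementById('thingy');
img.src = window.matchMedia('(max-width: 320px)').matches
  ? 'low-res.png'
  : 'medium-res.png';
</script>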

So the only way to beat the preloader is to put all the potential image sources in the HTML and give the browser all the information it needs to make the selection there, too. That’s what the w and x descriptors in srcset are for, and the sizes attribute.
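
A sketch of that syntax (filenames and breakpoints invented for illustration):

<img src="small.png"
     srcset="small.png 320w, medium.png 600w, large.png 1024w"
     sizes="(max-width: 600px) 100vw, 50vw"
     alt="a mankini">

The w descriptors tell the browser how wide each file really is, and sizes tells it how large the image will be displayed, so the preloader can pick the best source without waiting for CSS or JavaScript.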

Of course, I’ll explain it with far more panache and mohawk in Barcelona. So why not come along? Go on, you know you want to and I really want to see you again. Because I love you.

Reading List

Apologies for the irregularity of the Reading List at the moment; September and October are autumn conference season and my schedule is bonkers.

Responsive Images

A meeting at Mozilla Paris on how to solve Responsive Images, organised and summarised by Marcos Caceres, concluded:

  • Browser vendors agree that srcset + DPR-switching is the right initial step forward (i.e., the 2x, 3x, etc. syntax).
  • Agreement to then consider srcset + viewport size after some implementation experience (possibly drop height syntax from srcset spec). If not implemented, Width/Height syntax to possibly be marked at risk in srcset spec.
  • Browser makers acknowledge the art-direction use case, but still think <picture> is not the right solution.
  • Adding new HTTP headers to the platform, as Client-Hints proposes to do, has had negative impact in the past – so Client Hints might need to be reworked a bit before it becomes more acceptable to browser vendors.

So initially, we’ll use something like

<img src="normal.png" 
srcset="retina.png 2x"
alt="otter vomiting">

Browsers that have “retina” displays will choose retina.png as they have 2 CSS pixels to one physical pixel. Browsers that aren’t retina, or don’t understand the new syntax, fall back to the good old src attribute.

WebKit and Blink have implemented (but not yet shipped) srcset, and Mozilla is planning its implementation now.

Meanwhile, an alternative “srcN” proposal has been put forward by Tab Atkins and John Mellor (excitingly, “John Mellor” was the real name of The Clash’s Joe Strummer). It claims to solve the resolution-based, art-direction and viewport-based discrimination use cases. Discussion here.

UK Government Web

The Cabinet Office’s Open Standards Board is recommending open standards technology. The first two to be approved are HTTP/1.1 and Unicode UTF-8. Francis Maude, the Minister, allegedly said “open standards will give us interoperable software, information and data in government and will reduce costs by encouraging competition, avoiding lock-in to suppliers or products and providing more efficient services”.

This may not be revelatory to those of us in the web world, but it’s a Good Thing for the nation.

I had the pleasure of hearing Paul Annett (now of Twitter, previously of gov.uk) talking about the gov.uk initiative at the From The Front conference a few days ago, and thought it a sign of schizophrenia that the same government that can let subject experts make a world-leading governmental portal also disregards experts and its own consultation in wanting to censor the web.

I realise now that it’s the old Tory DNA: the belief in encouraging competition by economic liberalism, reducing bureaucracy, while remaining socially authoritarian and reeling from one moral panic to the other. So no change there.

Reading List

Responsive images

WebKit has (partially) implemented a new attribute to our ancient chum <img> called srcset that allows authors to send a high-res image only to browsers that have high-resolution displays. It looks like this:

<img alt=… src="normal-image.jpg" srcset="better-image.jpg 2x">

That “2x” thing after the file name means that if a browser has 2 or more physical pixels per CSS pixel (e.g. a high-resolution display), it is sent better-image.jpg. If it’s not high-res, or if it’s a browser that doesn’t support srcset, it gets normal-image.jpg. There’s no JavaScript required, and it doesn’t interfere with browsers’ pre-fetch algorithms because it’s right there in the markup.

You can extend it further if you want to:

<img alt=… src=… srcset="better-image.jpg 2x, super-image.jpg 3x">

This implementation doesn’t have the horrible “pretend Media Queries” syntax that sources close to Tim Berners-Lee* called “like, a total barfmare, man”, but this is potentially a great leap forward; it saves bandwidth for the servers, stops people downloading gigantic images that they don’t need, is easy to understand and has graceful fallback.

Let’s hope it turns up in Blink, Trident and Gecko soon.

* “sources close to” is UK newspaper code for “we just made it up”.

Graceful degradation of SVG images in unsupporting browsers

Very very clever: SVG and <image> tag tricks. (Yes, <image> which the HTML5 parser aliases to <img>.)
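
The trick, as I understand it, looks something like this (filenames invented):

<svg width="96" height="96">
  <!-- SVG-capable browsers render the vector via xlink:href;
       browsers without SVG support parse <image> as <img>
       and load the src fallback instead -->
  <image xlink:href="logo.svg" src="logo.png" width="96" height="96"/>
</svg>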

Microdata / RDFa / “semweb” shizzle / SEO

In The Downward Spiral of Microdata, nice Mr Manu Sporny predicts the death of “HTML5” Microdata and the triumph of RDFa Lite now that both WebKit and Blink have dropped support for the Microdata API (which allowed JS access to Microdata).
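
For those who never used it, here’s a sketch of the access that API gave (assuming a browser that still supports document.getItems; the markup uses a schema.org vocabulary):

<div itemscope itemtype="http://schema.org/Person">
  <span itemprop="name">Manu Sporny</span>
</div>
<script>
// The Microdata DOM API: query items by their itemtype
var people = document.getItems('http://schema.org/Person');
console.log(people.length); // 1, in a supporting browser
</script>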

Co-incidentally, Mr Sporny is an inventor of RDFa Lite. Personally, I don’t care which triumphs – now that only Opera Presto supports the Microdata API, there is no technical reason to prefer one to the other (in fact, as Facebook supports RDFa and not Microdata, you could argue RDFa has greater utility).

Save bandwidth with webP – soon with fallback!

A long time ago, “responsive” didn’t mean “resize your browser window repeatedly while fellow designers orgasm until they resemble a moleskine atop a puddle”. It simply meant “Reacting quickly and positively”, meaning that the page loaded fast and you could interact with it immediately.

One way to do this is to reduce the weight of the page by serving images with smaller file sizes, thereby consuming less bandwidth and taking less time to download the page. Over the last year, the number of images on an average web page has stayed roughly the same, but their total size has increased from about 600K to 812K, making images about 60% of total page weight.

One way to reduce this is to encode images in a new(ish) format called webP. It’s developed by Google and is basically a still-image version of the VP8 codec used in their webM video format. Google says

WebP is a new image format that provides lossless and lossy compression for images on the web. WebP lossless images are 26% smaller in size compared to PNGs. WebP lossy images are 25-34% smaller in size compared to JPEG images at equivalent SSIM index. WebP supports lossless transparency (also known as alpha channel) with just 22% additional bytes. Transparency is also supported with lossy compression and typically provides 3x smaller file sizes compared to PNG when lossy compression is acceptable for the red/green/blue color channels.

Opera uses it precisely for this compression: webP powers Opera Turbo, which can be enabled in Opera desktop, Opera Mobile and the Chromium-based Yandex browser. Turbo transcodes images on-the-fly to webP before squirting them down the wire and, even with that extra step, on slower connections it’s still faster.

In tests, Yoav Weiss reported that “Using WebP would increase the savings to 61% of image data”.

WebP is currently supported only in Opera (Presto), Google Chrome, Yandex and the Android Browser on Ice Cream Sandwich, which makes it difficult to deploy on the Web. Firefox doesn’t like it, and IE hasn’t said anything. (I wonder if the new confidence in the VP8 video codec on which it’s based might make them feel better about it?)

However, there’s some handy new CSS coming to the rescue soon (when browser vendors implement it). We’ve long been able to specify CSS background images using background-image: url(foo.png);, but now say hello to CSS Image Values and Replaced Content Module Level 4’s Image Fallbacks, which uses this syntax:

background-image: image("wavy.webp", "wavy.png", "wavy.gif");

(Note image rather than url before the list of images.)

The spec says “Multiple ‘image-src’s can be given separated by commas, in which case the function represents the first image that’s not an invalid image.”

Simply: the browser goes through the list of images and grabs the first one it can use. If one 404s, it continues down the list until it finds one that works. Note that this isn’t supported anywhere yet, but I hope to see it soon.

[Added after a reminder from Yoav Weiss:] It needs finessing too; Jake Archibald points out “If the browser doesn’t support webp it will still download ‘whatever.webp’ and attempt a decode before it’ll fallback to the png” and suggests adding a format() qualifier, from @font-face:

background-image: image("whatever.webp" format('webp'), "whatever.jpg");

But what about old [current] browsers, I hear you ask? Give them the current url syntax as fallback:

background-image: url("wavy.gif");
background-image: image("wavy.webp", "wavy.png", "wavy.gif");

Now all browsers get a background image, and those that are clever enough to understand webP get smaller images. Of course, you have to make a webP version (there are webP conversion tools, including a Photoshop plugin).

It seems to me that the spec is overly restrictive, as it requires the browser to use the first image that it can. webP is heavily compressed, so it requires more CPU to decode than traditional image formats. I could therefore imagine a browser that knows it’s on WiFi and running on battery (not plugged in) choosing a PNG or JPG rather than webP to save CPU cycles, even though the file size is likely to be larger.

What about content images?

Of course, not all images on your webpages are CSS background images. Many are content images in <img> elements, which don’t allow fallbacks.

There is, however, an HTML5 element that deliberately allows different source files to get over the fact that browsers understand different media formats:

<video>
  <source src=foo.webm type=video/webm>
  <source src=foo.mp4 type=video/mp4>
  ... fallback content ...
</video>

Wouldn’t it be great if we could use this model for a New! Improved! <img> element? We couldn’t call it <image>, as that would be too confusing and the HTML5 parser algorithm aliases <image> to <img> (thanks, Alohci). So for the sake of thought experimentation, let’s call it <picture> (or, if we’re bikeshedding, <pic> or —my favourite— <bruce>). Then we could have

<picture>
  <source src=foo.webp type=image/webp>
  <source src=foo.png type=image/png>
  <img src=foo.png alt="insert alt text here"> <!-- fallback content -->
</picture>

And everyone gets their images, and some get them much faster.