Device Detection vs Responsive Web Design

This week’s reading list is devoted to Device Detection vs Responsive Web Design.

With all the cool kids getting into RWD these days, it’s time to have a look at the Device Detection companies again. Device Detection is the practice of matching a device’s UA string against a table of known strings, looking up that device’s characteristics, and serving a different website accordingly.

Of course, the utility of such services is dependent on the quality of the look-up table: how many devices it knows about (all the ones in the world, ever?), how frequently it’s updated (have they added the Umbongo J2O TrouserPhone S+ that was released on Tuesday yet?) and how accurate its data is (does the TrouserPhone S+ really have a 178680979 × 7 pixel smellovision display?). They are, however, an order of magnitude more reliable than terrible CMS plugins or JavaScript libraries that were written years ago and which register IE11 as IE1, or don’t know Chrome exists. UA strings are comically unreliable, being the frontline in an unceasing battle between browser-sniffers who want to deny entry to certain browsers, and browser vendors who want their users to get a first-class experience.
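To make that concrete, here’s a minimal sketch of the look-up described above. The table entries and capability fields are invented for illustration – real services like WURFL or DeviceAtlas use vastly larger databases and more sophisticated matching than a substring search.

```javascript
// Hypothetical in-memory device table; real products ship databases of
// thousands of devices, updated continuously.
const deviceTable = [
  { match: 'iPhone',    capabilities: { touch: true,  maxPageKb: Infinity } },
  { match: 'Series 40', capabilities: { touch: false, maxPageKb: 8 } },
];

// Unknown UAs fall through to a fully-capable default, so future browsers
// are never locked out.
const defaultCapabilities = { touch: true, maxPageKb: Infinity };

function lookupDevice(uaString) {
  const entry = deviceTable.find(e => uaString.includes(e.match));
  return entry ? entry.capabilities : defaultCapabilities;
}
```

The crucial design choice is the default: assume full capability when the UA is unrecognised, rather than denying entry — exactly the failure mode the ancient CMS plugins get wrong.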

Examples of Device Detection companies include ScientiaMobile (WURFL), DeviceAtlas and 51Degrees. The databases owned by such companies do include device characteristics unavailable through client-side detection. For example, you can’t find out from JavaScript whether a device is actually a touchscreen device, the physical dimensions of its screen, or the retail price of the device (which advertisers want to know, apparently – you only want to advertise yachts to gold iPhone or Umbongo J2O TrouserPhone S+ owners).

Mike Taylor, an ex-colleague of mine at Opera, now at Mozilla (and pathological hater of chickens) set up a collaborative document to collect use cases that people are trying to solve with UA detection (which can’t be solved by feature-detection), which is summarised by Karl Dubost (ex-Opera, now Mozilla) in User Agent Detection Use Cases.

Those who oppose Device Detection do so for philosophical reasons – it’s one web, and we shouldn’t serve different content to different devices or browsers – or because they are browser vendors: Internet Explorer, Firefox OS and Opera all have reasons to dislike browser sniffing or device detection (“this website is only available to iPad users”). Google, meanwhile, uses device detection all the time on its properties, as do many other large companies.

The device detection companies have begun to issue reports comparing their products with responsive, client-side techniques. Here are three that I’ve seen this week:

They’re worth reading. Of course, case studies only go so far; every business, territory and site is different. One thing everyone agrees on is that performance matters – slow sites lead to fewer conversions. mobiForge has an article, M-commerce insights: Give users what they want, and make it fast, which claims:

RWD sites were the slowest, on average, to load on mobile – 8.4 seconds – while dedicated mobile sites loaded fastest – in 2.9 seconds. Non-responsive desktop sites took 6.57 seconds to load.

I’d like to see proper A/B testing: a well-made responsive version of a site versus its “m-dot” equivalent, redirected from its canonical URL and assembled after a device look-up, across a variety of devices and network conditions. If we’re going to argue, it might as well be about data.

Update 1 Dec 2014: Here’s some initial research on the top 1,000 mobile websites, M dot or RWD. Which is faster?, which concludes that “m dot” sites are 50% slower for time to first byte, and

RWD sites are VERY competitive on Visually Complete and SpeedIndex scores. The median values are within 5% for both metrics. Even though it appears that RWD is faster, there is enough fluctuation in the data that we should probably call it a dead heat.

16 Responses to “Device Detection vs Responsive Web Design”

Comment by Patrick H. Lauke

I’ve advocated previously (hah, having flashbacks to my preso from 2012 Adapt and respond) that having a separate site is still a viable choice. It’s simply another tool/option. Also, it’s not necessarily a binary decision: even once you do stuff based on browser detection server-side, including moving the user to another subdomain, you should still use responsive techniques. And of course offer the user a way to get back to the “non-mobile-or-whatever-your-script-decided” version.

Comment by Ronan Cremin

Device detection is often characterised as a kind of restriction or artificial barrier. It may have been in the past, but these days server-side adaptation is used to deliver an improved experience. Companies like Google use it because it enhances their ability to serve their customers, in all territories, on all devices. Like how Google slightly adjusts line heights on your tablet to make targets easier to tap? It really helps. When you download Opera you’re automatically directed to the right download link. That helps too.

Yes, not all companies do it well and mistakes are made, but overwhelmingly server-side detection is used to serve visitors, not hinder them.

The one web concept is hardly meant to be taken literally. It’s worth remembering what TBL said on the subject in the HTTP 1.0 spec in RFC1945, section 10.15 (emphasis mine):

The User-Agent request-header field contains information about the user agent originating the request. This is for statistical purposes, the tracing of protocol violations, and automated recognition of user agents for the sake of tailoring responses to avoid particular user agent limitations.

A rookie mistake in HTTP 1.0? Nope – it’s in there again in HTTP 1.1 as specified in RFC2616, years later.

The W3C stance on the matter is worth noting too (emphasis mine):

One Web means making, as far as is reasonable, the same information and services available to users irrespective of the device they are using. However, it does not mean that exactly the same information is available in exactly the same representation across all devices. The context of mobile use, device capability variations, bandwidth issues and mobile network capabilities all affect the representation. Furthermore, some services and information are more suitable for and targeted at particular user contexts.

If you take the one-web concept literally, then a page that works perfectly well on some devices will completely fail to load on others – hardly a desirable outcome. Some browsers still in use have an 8KB maximum page weight.

As always, the reality of the real world gets in the way of platonic ideals and some flexibility benefits everyone. TBL certainly seemed to think so, as does the W3C.

Comment by Stephen

I’m using device detection for RESS on an m-commerce site and am able to compare conversion against a purely responsive design, but I’m not able to share the results.

I’d encourage people to be open-minded and test; you’re spot on to raise this.

Comment by Tom Maslen

There’s one issue with device detection that the companies offering it as a service never talk about: scale.

The main benefit of RWD is that you write a feature once and it’s available to everything. This doesn’t benefit the individual user, but it does benefit your organisation. RWD allows you to scale up to meet any type of device in the future; it’s forward-looking. Device detection is about reacting to what’s available now; it’s backward-looking. Would a device detection strategy have predicted and catered for the iPhone 6 Plus?

Another way to look at scale is the amount of traffic your site can get. Device detection works, especially RESS, and it is an option for you, but you must understand that this strategy means EVERY request you get has to be handled by your server-side code. If your site is taking millions of requests a day, your infrastructure had better be able to handle it, or you need to be able to afford AWS auto-scaling up to meet that level of demand.

RWD gives you the ability to ramp up the amount of traffic your site receives without having to fundamentally change your tech strategy, because it’s compatible with server-side caching and is CDN-friendly.
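The caching point can be sketched like this: a responsive page can be cached under its URL alone, while a RESS page must also be keyed on the detected device class (in HTTP terms, a `Vary: User-Agent` response), which fragments the cache. The function and device-class names below are illustrative, not taken from any real CDN’s API.

```javascript
// Compute the cache key a CDN would use for a response. A responsive site
// serves one document per URL; a RESS site serves one per URL *per device
// class*, so each class must be cached (and kept warm) separately.
function cacheKey(url, strategy, deviceClass) {
  if (strategy === 'rwd') {
    return url;                      // one cached copy serves every device
  }
  return url + '|' + deviceClass;    // e.g. '/home|phone', '/home|desktop'
}
```

With three device classes and a million URLs, the RESS cache has three million entries where the RWD cache has one million – and every cache miss falls through to your origin servers.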

Scaling is a fundamental advantage RWD has over device detection. I would never disregard device detection out of hand, but people making business decisions about long-term technical consequences have to understand this. When thinking about traffic, always multiply your estimates by 10 and then look at your architecture to see if it can take that load. Pointing at Google and saying “it’s good for them so it’s good for me” is short-sighted. Google has hundreds of millions of dollars and worldwide infrastructure. They can afford to do device detection, but the real question is: can you?

Comment by Tom Maslen

And another thing (sorry if I’m being too ranty):

Comparing total download size between a RESS-based site and a purely responsive site won’t give you the best idea of which is more performant. Perceived performance matters more to the user than actual performance.

A RESS-based site might produce a smaller total payload, but it will involve making multiple round trips to the server. RWD can easily download more than it needs, but downloading a bit more than is necessary can be faster than making multiple requests to the server. The Guardian’s strategy of inlining all the CSS required for the top part of the page is a great example of this. Most of the page loads while you’re interacting with the top half, and so you never notice that the page is still loading.
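The inlining technique mentioned above can be sketched roughly as follows: embed the CSS needed for the top of the page directly in the document head, and fetch the full stylesheet without blocking the first render. The file name and markup here are hypothetical, not The Guardian’s actual implementation.

```javascript
// Build a <head> that renders the top of the page immediately: critical CSS
// is inlined, and the remaining stylesheet is loaded asynchronously using
// the preload-then-apply pattern.
function renderHead(criticalCss) {
  return [
    '<head>',
    '<style>' + criticalCss + '</style>',  // applies before any extra round trip
    // Fetch the rest of the CSS without blocking first render:
    '<link rel="preload" href="/full.css" as="style" onload="this.rel=\'stylesheet\'">',
    '</head>'
  ].join('\n');
}
```

The trade-off is a slightly heavier HTML document in exchange for removing a render-blocking request – which is exactly the perceived-performance win being described.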

Comment by Bruce


“Comparing total download size between a RESS based site and a purely responsive site won’t quite give you the best idea of what is more performant. Perceived performance is more important to the user than actual performance.”

I absolutely agree. (Opera has lots of experience optimising perceived performance in our browsers.)

I’d hope that A/B testing would take account of this. It would be great for real data to inform the debate (rather than data cherry-picked to support an argument from the beginning.)

“You must understand that this strategy means EVERY request you get has to be handled by your server side code. If your site is taking millions of requests a day your infrastructure better be able to handle it”

Yes, this concerned me when I was tech lead for a large website – and that was before the huge rise in mobile browsing (this was around 2008).

Comment by Luca Passani

Hi Luca Passani of WURFL (and ScientiaMobile’s CTO) here.

It is obvious to me that RWD has huge advantages for the average company that intends to do a better job of supporting mobile. RWD (and the fact that the majority of users have a smartphone!) has enabled websites to be visualised on mobile phones in a way that was previously only possible through the deployment of a separate m.* site and a big pain in the backside called “Device Detection” (which historically people have been doing with WURFL from 2002 all the way to today).

So, I agree with Tom when he says that RWD scales better than device detection. The question comes when a (larger) organization with the resources to go the extra mile is willing to use more resources to deliver a service that is harder to deliver with a pure client-side approach. In that case, device detection is valuable. And it is for one obvious reason: there is *nothing* in RWD that prevents it from benefiting from server-side optimization. You can use RWD and still decide to optimize with Device Detection without losing any of the advantages of RWD. Our own company website is RWD with a few server-side optimizations that make it work on browsers and devices that would normally crash and burn in the presence of a RWD site (see it work on Opera Mini if you don’t believe me, or on a Symbian device with one of those crippled WebKits).

I’ll go one step further. Many (most?) websites that the RWD front-figures rightly bring forward as great examples of “state-of-the-art responsiveness” will look at the user-agent here or there to make sure that this or that issue goes away for owners of this or that device. I imagine that a directive came from the upper floors to simply “fix it”, and no high-falutin’ discussion on the pureness of the web was considered a good enough counter-argument 🙂

At the end of the day, this shouldn’t be a religious discussion. RWD is cool. It solves a problem for many. Some have specific needs and may want to go the extra mile irrespective of the relatively higher cost.

Finally, the WURFL team has always been close to the needs of mobile developers. ScientiaMobile has created developer tools (available at no charge) that play well with RWD. In short: WURFL.js, WURFL Image Tailor (WIT) and the MOVR report, all available at

Thank you

Luca Passani

Comment by Scott Jehl

Thanks for this post, Bruce.

I think it’s a sign of progress that feature-based approaches are now typically referenced as an ideal, and that the discussion now focuses on particular cases where device-agnostic code falls short of our needs, or the cases where device detection can be used to optimize an existing site instead of driving its support strategy. It’s even more encouraging to find that those particular cases where device-specific code is necessary seem to occur less and less with every site I build.

It’s funny that “knowing” something very particular about a device can often lead to error-prone logic. For example, you mentioned that “you can’t find out from JavaScript whether a device is actually a touchscreen device…”, and while that’s true, I find that “knowing” it rarely helps. Most – all? – touch-supporting devices have alternate, non-touch means of interacting with web content that we absolutely need to support. There are more devices out there that aren’t solely touch, as you know – and even if we “know” it’s a touchscreen, we need to normalize our listeners for other events. It turns out we really need to deliver code that can react to the unexpected, not just the conditions listed in a stock UA profile.
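The “normalize our listeners” point can be sketched as: instead of branching on “is this a touchscreen?”, wire every activation method to the same handler. This is a hand-rolled illustration, not code from any particular library.

```javascript
// Route mouse clicks, keyboard activation, and touch taps to one handler,
// so a device with several input modes (a touch laptop, say) works
// regardless of what a UA profile claims it "is".
function onActivate(element, handler) {
  // 'click' fires for mouse, keyboard (Enter/Space on buttons), and taps.
  element.addEventListener('click', handler);
  // Handle touch directly for responsiveness, suppressing the synthetic
  // click that would otherwise fire the handler a second time.
  element.addEventListener('touchend', function (event) {
    event.preventDefault();
    handler(event);
  });
}
```

Nothing here needed to know whether the device was “a touchscreen device” – it simply reacts to whichever input actually arrives.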

That said, we do use device-specific logic – but it’s used as a fallback, when we know a feature-based approach will fall short. These cases do happen, but it’s important to note how rare they are. We might have one or two features of an entire site that benefit from a browser-specific workaround, and even then it’s usually because we chose to use a notoriously risky feature like position: fixed or overflow. That approach, coupled with some strong mustard-cutting, is a huge win for us: we can deliver accessible sites and spend our time on more maintainable solutions.

Broadly speaking, I find it downright troubling when I hear that device-detection or “RESS” is somehow required to deliver a high-performance site. In my and Filament’s experience, that’s simply not the case. Fortunately we’re seeing more and more examples of approaches that require no device-specific code to deliver incredibly fast sites, that even work great in browsers like Opera Mini (cited earlier). All without any device detection library required.

I’m so thankful that in the past year or two, we’ve gained great tools like to analyze this stuff and find where the real bottlenecks exist.

Comment by Stephen

Oh dear, I hesitate to reply again. But,

“Would a device detection strategy have predicted and catered for the iPhone 6 Plus?”

Yes, it should. I think there’s a misunderstanding. If your strategy includes capability detection, it isn’t a substitute for a responsive layout, nor will it magically make anything faster – these are expectations of anything that’s built in 2014, right? In our case it gives us a better understanding of our customers’ intentions. It won’t work for everybody (you do need the resources) and we don’t include it in everything we do by default – but it’s one of the tools in our box. We think of it as progressive enhancement: build something that works for everyone and, according to the capability of the device, enhance it. It must be a measurable enhancement.

“It’s funny that sometimes “knowing” something very particular about a device can often lead to error-prone logic.”

The point about errors in the design of device detection is true, but errors in media queries or JS are just as likely to cause a poor experience for users. The design should be completely free of errors, right? I believe the problem is that our experience of designing for devices isn’t mature, rather than device detection being error-prone per se.

Comment by Jon Arne S.

Thank you for putting this discussion in the spotlight again. Great comments too.

There are many signs of progress on our way to making the web mobile. I am happy to see that Scott refers to feature-based approaches as “an ideal” and not “the ideal”. What is ideal depends – on too many factors to mention here. We all know this, I think.

I also agree that the cases where we for some reason need to “multiserve” or create device-specific code occur more rarely than before. Having said that, we see a lot of new use cases for DDRs related to web development, analytics, infrastructure and the online business model in general. In my experience (disclaimer: I now work for ScientiaMobile), the desire to know stuff about the device/browser, especially server-side, has never been greater. One example is that device intelligence is used to analyze user behaviour – you might call it “big data” – to streamline things like content strategy and design or technology choices.

So, device detection does two important things: 1) it helps you identify the “ideal solution” and 2) it may help you implement the “ideal solution”. There has never been a binary RWD vs. DDR question. The question is what the best mix is between server-side logic and client-side logic. And honestly, it is about time that the “cool kids”, as Bruce says, started to think of the web server as something more than just a place to store JavaScript and CSS.

Comment by Andrew Betts

There’s an implication here that device detection == doing unspeakable things with user agents, and that this is an alternative to feature detection / RWD. What we’re finding at the FT is that the ideal requires both.

There are things that device detection can offer, and UA recognition can be reliable, as long as it’s done well, kept up to date, and the default case is to assume support for everything. The thing that typically gives UA sniffing a bad name is only allowing users to see the good stuff if they have a UA string that contains some magic word. This, of course, leads to browsers with UA strings that are simply all the magic words combined.

We use UA sniffing / device detection for our polyfill service. This is a great example of how we can use device detection to enhance the “one web” ideal – by making more browsers capable of interpreting the same content.

We couldn’t use feature detection for this, because adding a missing feature might require any one of a number of different possible implementations based on exactly what building blocks are available in that browser for us to build on.

Finally, on Tom’s point about scaling – he’s right in principle, but there are lots of things you can do to mitigate such problems, I wrote about some of them in this blog post I’m about to shamelessly plug:

Comment by Mark G

Device detection unfortunately is still essential, if undesirable. Feature detection is severely limited by the fact that a browser may report having a feature, and indeed may officially support it, but that support is so badly mangled as to be unusable. Feature detection in that case is not only useless, it’s a liability.

Comment by Tom Maslen

+1 on everything Andrew Betts just said. My annoyance isn’t so much with the technique as the way the technique is portrayed by the companies bigging it up.

When it comes to content negotiation I wouldn’t use a UA string to make a big decision like “fork the users into two groups”, but I would consider it for adding to the base experience. I’m shamelessly in favour of sniffing for the latest iPhone/Android devices to conditionally add a USP to a product.

Also note: all my experience is based on making large scale news websites, I’ve never built a big SPA before, so my thoughts are skewed towards that kind of product.

“Device detection unfortunately is still essential, if undesirable. Feature detection is severely limited by the fact a browser may report having such a feature, and indeed may officially support it, but that support is so badly mangled as to be unusable. Feature detection in that case is not only useless, it’s a liability.”

That’s true until about IE9. Modern browsers are very standards-compliant, and although they still have bugs and missing features, feature detection is very reliable. Mustard-cutting plus additional feature detection will help you.
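For contrast with the “found in the wild” variant Luca quotes later in the thread, a conventional cut-the-mustard test (the pattern popularised by the BBC) gates the enhanced experience on feature checks alone – no UA string in sight – so unknown future browsers pass by default. The helper name here is illustrative.

```javascript
// A straightforward cut-the-mustard test: browsers exposing these widely
// supported APIs get the full JavaScript application; everything else gets
// the core, server-rendered experience.
function cutsTheMustard(win, doc) {
  return 'querySelector' in doc &&
         'localStorage' in win &&
         'addEventListener' in win;
}

// In a browser you would call it as:
//   if (cutsTheMustard(window, document)) { /* bootstrap the app */ }
```

Passing `window` and `document` in as parameters also keeps the check testable outside a browser.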

Comment by Luca Passani

Tom, I am not sure who you are referring to when you write “my annoyance isn’t so much with the technique as the way the technique is portrayed by the companies bigging it up”, but since I work for a company that offers Device Detection, I sort of feel implicated here.

It seems to me that a general agreement has emerged here that RWD and client-side techniques are great, but detecting the device/browser is a necessity at times.
You talk about Cutting the Mustard? Very good – what if I told you that this is a variation of Cut the Mustard found in the wild?

if ('querySelector' in document
    && 'localStorage' in window
    && 'addEventListener' in window
    && window.operamini === undefined
    && ua.indexOf("Series 40") > -1
) {
    // bootstrap the javascript application
}
As usual, in theory there is no difference between theory and practice, in practice there is.

Anyway, I don’t really perceive the value in continuing a discussion in which everyone fundamentally agrees, and disagreement, if present at all, is largely confined to how much device detection is necessary – an aspect that will vary based on the project, developer experience and comfort level with programming paradigms, long-term support and so on.
If device detection becomes unnecessary one day, it will be through death from natural causes, and not because you have declared it dead 😉

Shall we call it quits?

Comment by Sean Burton

+1 for a combined approach: device detection to load the relevant structure (divs, navigation, etc.) and RWD to style and lay out the page. Inline CSS can be delivered via device detection too, which allows you to optimise the total page weight and the rendering.

All that said, you also need to take into consideration the time required to maintain and load new content.

It’s obviously worth reviewing your analytics to see the volume of traffic from different devices, and hence which strategy is likely worth pursuing. Additionally, having a clear view of what a successful experience or journey looks like will help prioritise the approach.

Know your customers well and design accordingly – customer first, not mobile first!

Comment by Ronan Cremin


Surprised to hear that detection causes scaling concerns. On a machine where Apache can serve X reqs/sec, I would expect any device detection solution to manage at least ten times that rate, and hence it should have only a slight effect on the overall ability of your server to handle traffic. Couple that with the ability to reduce what’s actually sent over the wire, and it may be a net positive overall.

To check this I ran a few quick tests here with Apache Bench, NGINX and DeviceAtlas. Serving a light, dynamically-generated page was about 5% slower with device detection than without, but a less trivial example would decrease the detection overhead as a percentage of overall request-processing time.

I ran the tests 3 times, with 10K requests and 10 concurrent threads, against NGINX 1.7.5.
