As an everyday user, when you encounter a bug, you don’t usually stop and think, “Maybe this issue is only happening on Chrome version 89 on Windows 10!” And why should you? Most people expect websites and apps to work properly on their system – especially if they’re not using an esoteric old version of Internet Explorer.

If you work in software development, it’s a good idea to remember that to a user encountering it, that “edge case” bug is everything. But how do you make sure you discover these environment-specific bugs?

And even after finding them, how do you prioritize them? After all, your app or website is never going to look completely perfect for 100% of possible device/browser/screen size combinations. As with most things, finding a balance is key.

Read on to find out more about why cross browser testing is important – and how to do it right.

There can be major bugs that only affect certain environments

The word “major” is key here. This is the most important reason to do cross-browser/device testing. If all environment-specific bugs were as minor as, “the browser bar icon is missing in Firefox,” it might not be worth the time and effort of testing across browsers/devices. But the reality is that a device-specific bug could mean many of your users couldn’t even open the app without it crashing. Just because a bug isn’t on all browsers/devices doesn’t mean that it isn’t a critical issue.

How to prioritize testing coverage

As mentioned above, you’re never going to be able to test every single device and browser version in the world. Even the most successful apps and websites have restrictions on which devices they formally support. For example, if you look at an App Store or Play Store listing, you may notice “requires iOS 14” or find that the app won’t install on a six-year-old Motorola phone.

But how do you prioritize which environments to spend your limited amount of testing time on? The most important factors include:

  • Market share. For example, if you know that 70% of your users are using the latest version of Chrome, you can dedicate more of your testing time accordingly.
  • Budget. If you have an unlimited QA testing budget of millions of dollars per sprint – congrats! But if you’re like most companies, chances are you have a finite amount of testing hours budgeted.
  • Timeline. If you had six months to test a new website launch, you could probably test on every browser version that has ever existed. But when you have six days, you can only fit in so much coverage.
  • Customer base. Often customer base is similar to market share. But sometimes your users fall into specific demographics. For example, if your product is mainly used by people with low incomes, the latest, most expensive iPhone model is less likely to be prevalent among your customer base.

Recommendations for coverage

Market share and customer base are similar ways to figure out the best testing coverage. If you haven’t formally launched yet, you’ll likely need to rely on general market share. You can still take your specific customer base into account, but a little more guesswork will be involved.

For example, you might be targeting a certain type of customer. But that doesn’t mean that most of your users will end up being in that group. Overall, though, you can look up market share based on your target market, and go from there. For example, let’s say that you plan to have users mostly in the United States. If you research browser market share within the US and find that 50% of Americans use Chrome, this will be helpful in determining coverage.

If you have an existing (and sizable) customer base, you can make this decision with even more precision. For example, if you have App/Play Store consoles or Google Analytics, you can see the percentage of users that are on each browser/device. That doesn’t mean that you’ll want to split up testing exactly by those percentages, but it’s a good starting point.
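For instance, if your site runs Google Analytics 4, the Data API can pull that browser/device breakdown programmatically. The sketch below assumes GA4 and the official @google-analytics/data client; the property ID is a placeholder, and the exact report shape may differ for your setup:

```ts
// Sketch: pull a browser/device breakdown from Google Analytics 4.
// Assumes GA4 and the official @google-analytics/data client;
// 'properties/123456789' is a placeholder property ID.
import { BetaAnalyticsDataClient } from '@google-analytics/data';

const client = new BetaAnalyticsDataClient();

async function browserShare(): Promise<void> {
  const [response] = await client.runReport({
    property: 'properties/123456789',
    dateRanges: [{ startDate: '30daysAgo', endDate: 'today' }],
    dimensions: [{ name: 'browser' }, { name: 'deviceCategory' }],
    metrics: [{ name: 'activeUsers' }],
  });

  // Total active users across all environments, to compute percentages.
  const total = (response.rows ?? []).reduce(
    (sum, row) => sum + Number(row.metricValues?.[0].value ?? 0), 0);

  for (const row of response.rows ?? []) {
    const browser = row.dimensionValues?.[0].value;
    const device = row.dimensionValues?.[1].value;
    const users = Number(row.metricValues?.[0].value ?? 0);
    console.log(`${browser} (${device}): ${((100 * users) / total).toFixed(1)}%`);
  }
}

browserShare().catch(console.error);
```

The App Store and Play Store consoles expose similar device and OS-version breakdowns in their own dashboards, so you can get comparable numbers for native apps without writing any code.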

One reason that you might not want to split up testing in the exact way your users are split is that some environments are more likely to have bugs than others. Let’s say that you have 40 hours of testing available, with users divided as follows:

  • 40% on Chrome desktop
  • 8% on Safari desktop
  • 5% on Firefox desktop
  • 4% on Edge desktop
  • 20% on iPhone Safari
  • 7% on iPad Safari
  • 12% on Android Chrome
  • 4% on Android Samsung browser

Does that mean you would spend 40% of testing hours on Chrome, 8% on Safari desktop, and so on? Not necessarily. Testing a site in desktop Chrome will probably take about as long as testing it in desktop Safari, and Chrome is less likely to have bugs than many other browsers. A large part of testing time goes to reproducing bugs and writing up bug reports, so browsers/devices with more bugs will often end up eating more of the testing time.

Essentially, you’ll want to balance your focus between the browsers your users most commonly use and the ones that are most likely to have bugs.
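To make that concrete, here’s a minimal sketch of one way to split a 40-hour budget by weighting each environment’s user share with a rough bug-risk factor. The bug-risk numbers are invented for illustration; in practice you’d base them on your own defect history:

```ts
// Sketch: split a fixed testing budget across environments by weighting
// each environment's user share with a rough "bug likelihood" factor.
// The shares come from the example above; the bugRisk values are
// made-up illustrative numbers, not measured data.
const totalHours = 40;

const environments = [
  { name: 'Chrome desktop',          share: 0.40, bugRisk: 1.0 },
  { name: 'Safari desktop',          share: 0.08, bugRisk: 1.5 },
  { name: 'Firefox desktop',         share: 0.05, bugRisk: 1.3 },
  { name: 'Edge desktop',            share: 0.04, bugRisk: 1.2 },
  { name: 'iPhone Safari',           share: 0.20, bugRisk: 1.5 },
  { name: 'iPad Safari',             share: 0.07, bugRisk: 1.4 },
  { name: 'Android Chrome',          share: 0.12, bugRisk: 1.1 },
  { name: 'Android Samsung browser', share: 0.04, bugRisk: 1.4 },
];

// Weight = user share × bug risk, normalized so the hours sum to the budget.
const totalWeight = environments.reduce((sum, e) => sum + e.share * e.bugRisk, 0);

for (const env of environments) {
  const hours = ((env.share * env.bugRisk) / totalWeight) * totalHours;
  console.log(`${env.name}: ${hours.toFixed(1)} hours`);
}
```

With this weighting, Chrome desktop still gets the largest slice, but buggier, lower-share environments like mobile Safari get proportionally more time than a straight percentage split would give them.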

Cross Browser Testing Checklist

Every app or website might have slightly different coverage needs. But it’s helpful to have a go-to checklist for cross browser testing. Here’s what we generally recommend when testing websites:

  • Chrome (most recent version – Windows/Mac)
  • Safari (most recent version – Mac)
  • Firefox (most recent version – Windows/Mac)
  • Edge (most recent version – Windows)
  • Desktop resolution sizes 1920 x 1080, 1366 x 768, and 1440 x 900
  • Large iPhone (for example, iPhone 12 Pro Max, iPhone 11 Pro Max, iPhone 8 Plus) Safari
  • iPhone X Safari
  • Small iPhone (for example, iPhone 12 Mini) Safari
  • iPad Mini Safari
  • iPad Standard Safari
  • iPad Pro Safari
  • Samsung Galaxy standard size (for example, Samsung Galaxy S21) Samsung browser
  • Samsung Galaxy large size (for example Samsung Galaxy Note 20 Ultra) Chrome
  • Google Pixel (most recent version) Chrome

Of course, if you have time and can cover additional versions, devices, and screen sizes – great! But the above should provide a reasonably thorough amount of coverage for moderate budgets.
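If part of your cross browser testing is automated, a checklist like this can map fairly directly onto a test-runner configuration. The sketch below uses Playwright’s bundled device descriptors as an example; the device names are close stand-ins for the checklist rather than an exact mirror, and the names available depend on your Playwright version:

```ts
// playwright.config.ts – a sketch of cross-browser/device coverage.
// Device names come from Playwright's device registry and may differ
// slightly between versions; treat them as illustrative stand-ins.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    // Desktop browsers at common resolutions.
    { name: 'chrome-desktop',  use: { ...devices['Desktop Chrome'],  viewport: { width: 1920, height: 1080 } } },
    { name: 'firefox-desktop', use: { ...devices['Desktop Firefox'], viewport: { width: 1366, height: 768 } } },
    { name: 'safari-desktop',  use: { ...devices['Desktop Safari'],  viewport: { width: 1440, height: 900 } } },
    { name: 'edge-desktop',    use: { ...devices['Desktop Edge'] } },

    // Mobile Safari across sizes (emulated WebKit).
    { name: 'iphone-large', use: { ...devices['iPhone 12 Pro Max'] } },
    { name: 'iphone-small', use: { ...devices['iPhone SE'] } },
    { name: 'ipad-mini',    use: { ...devices['iPad Mini'] } },
    { name: 'ipad-pro',     use: { ...devices['iPad Pro 11'] } },

    // Android Chrome.
    { name: 'android-pixel', use: { ...devices['Pixel 5'] } },
  ],
});
```

Keep in mind that emulated devices aren’t the same as real hardware – mobile WebKit emulation, for example, won’t catch every bug that only shows up in Safari on an actual iPhone – so automation like this complements manual checks rather than replacing them.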

Should you still include IE 11?

Mention “Internet Explorer,” and you’ll probably elicit a knowing or frustrated groan from the average person in (or out of) tech. There’s good reason for this – it’s notoriously the most difficult browser to optimize websites for. Even if a website looks perfect on Safari, Chrome, Firefox, and Edge, chances are it will have some bugs on Internet Explorer.

Luckily, more and more companies are phasing out support for Internet Explorer. IE now has a market share of less than 1%, and it’s dropping by the day. As a result, we recommend dropping any formal test coverage on IE 11. It usually isn’t worth the time and money to spend hours testing and reporting bugs that only appear in Internet Explorer.

One way to find a nice balance is to do a cursory check on IE 11, but not any in-depth testing. That way, your team will be aware of any major issues ahead of time, and be ready to respond if customers write in. At the same time, you won’t have wasted tons of extra hours looking through every nook and cranny of the site in Internet Explorer.

When you DON’T need to do cross browser testing

There are times when cross-environment testing doesn’t need as much coverage as others. For example, let’s say that you have an iOS app, with 50% of your users on iPhone 7 and 50% on iPhone 8. (Not the most realistic split these days – but for the sake of example!) While you’d still want to test different versions of iOS, you wouldn’t need to devote the same amount of testing to both iPhone 7 and iPhone 8. Why? Because the devices are so similar in size and functionality that it would be very uncommon for a bug to appear on one and not the other.

Other common scenarios where you don’t need to duplicate testing on all supported environments:

  • Checking that links go to the right page. This only needs to be done once per platform. For example, if the iOS and Android apps are coded differently, you’ll want to check the link in each. But on a website, you typically only need to check it once. There are some rare cases where link behavior might be slightly different across browsers. For example, maybe it automatically opens in a new tab in Safari, but in the same tab in Chrome. Or perhaps it displays the linked PDF in a webpage view in one, and causes an automatic download in another. But generally speaking, if the link goes to the right page on one browser, it will do the same on others. (See the sketch after this list for what a one-time link check might look like.)
  • Testing the same browser at full coverage on both Mac and Windows. This may seem like it goes against the recommended cross browser testing checklist above. But we’re not suggesting that you skip Chrome/Firefox on either Windows or Mac – just that the two operating systems don’t need an equal amount of duplicated testing.
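As a quick illustration of the first point, a link check can live in a one-time script rather than being repeated in every browser. The sketch below is a minimal Node/TypeScript example that pulls the links from a single page and verifies each one responds; the start URL is a placeholder, and the regex-based extraction is deliberately rough:

```ts
// Sketch: check every link on one page once, rather than per browser.
// Uses Node 18+'s built-in fetch; the start URL is a placeholder.
const startUrl = 'https://example.com';

async function checkLinks(pageUrl: string): Promise<void> {
  const html = await (await fetch(pageUrl)).text();

  // Very rough href extraction – fine for a sketch, not a real crawler.
  const hrefs = [...html.matchAll(/href="(https?:\/\/[^"]+)"/g)].map((m) => m[1]);

  for (const href of new Set(hrefs)) {
    try {
      const res = await fetch(href, { method: 'HEAD' });
      if (!res.ok) {
        console.warn(`Unexpected status ${res.status}: ${href}`);
      }
    } catch (err) {
      console.warn(`Failed to reach ${href}:`, err);
    }
  }
}

checkLinks(startUrl).catch(console.error);
```

Whether links resolve correctly is a property of the site, not the browser, which is why a single pass like this is usually enough.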

In addition, if you’re testing software for a client, it’s important to respect their preferences on cross browser testing. You can provide any helpful information and guidance on test coverage. But at the end of the day, they may not choose to have you test across all browsers due to budget – or any of the other reasons above.

Need help with cross browser testing?

If you have an app or website and need it tested across browsers and mobile phones, we offer on-demand QA testing services. Contact us to find out our on-demand hourly rates.