When it comes to website development, Quality Assurance is an area where seemingly reasonable expectations can far exceed a seemingly reasonable budget. Clients paying an outside firm or working with an in-house team to build their website typically want it done as a stand-alone project with a fixed budget and deadline, and expect everything to “just work”. These are perfectly reasonable expectations, but there’s a lot of nuance to what “just work” means in the context of your custom-built website. And the space within that gray area can often mean meeting, doubling or even tripling your budget and timeline.
Software quality is not a pass-fail test, but is measured in degrees, such as the number of issues found or the percentage of users experiencing errors. Getting those numbers as low as time and budget allow is the goal of every software project manager. Getting them to zero gets time-consuming and expensive fast. This post looks at the website Quality Assurance gray area and outlines some of the processes and associated costs that go into making a site meet quality expectations.
Defining “Just Works”
Before we get into the various activities associated with levels of testing, let’s consider what it means for a website to “just work”. Just because a website works for a home user in Chicago at 2PM using Chrome on a Windows 7 machine with a 17” monitor doesn’t mean a different page of that site will work for a corporate user in France using Firefox on a Mac with a 24” monitor. Consider some of the variables that may be considered when testing a website:
- Browser type (Chrome, Firefox, Edge, Safari, IE, Opera) – Wikipedia lists 11 different browsers released within the past 5 years.
- Browser version – Chrome alone is on version 66, and most browsers have at least three versions in active, common use.
- Network environment – Is there a corporate firewall, internet filter or proxy server in place? Is the user browsing on dial-up, a cellular connection or broadband?
- Operating system and version – There are several supported versions in use of both Mac OS and Windows, not to mention the dozens of popular Linux flavors. And don’t forget mobile operating systems.
- Screen size – A modern website is expected to look great on dozens of screen sizes ranging from hand-held devices to TV sized monitors.
- Date, time and location – Depending on the functionality of your site, date, time and location can add hundreds of additional variants to a user profile.
- Feature or page – All of these variants may interact differently with various templates, pages or features on your website.
Now, count up each of those variants and then multiply them by each other. Some quick work on a calculator will show you how quickly, and exponentially, the number of possible test cases grows for even a relatively simple website. And remember that the functionality of your site itself needs to be tested, too, regardless of the end user variants. Consider how many combinations of interactions, user flows and input data exist for even a relatively simple web form. And multiply. So, if testing every possible variant is an unreasonable expectation, which variants and combinations should be tested? Thinking about your website’s Quality Assurance this way helps us start to put a box around “just works”.
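To see how fast the numbers grow, here is a quick back-of-the-envelope calculation in Python. The variant counts below are hypothetical, chosen just to illustrate the multiplication:

```python
# Illustrative variant counts for a fairly simple site (hypothetical numbers)
variants = {
    "browsers": 6,
    "browser_versions": 3,
    "operating_systems": 5,
    "screen_sizes": 8,
    "network_conditions": 3,
    "site_templates": 10,
}

# Every combination of variants is, in principle, a distinct test case.
total_cases = 1
for count in variants.values():
    total_cases *= count

print(total_cases)  # 6 * 3 * 5 * 8 * 3 * 10 = 21600 combinations
```

Even with these modest counts, exhaustive coverage would mean tens of thousands of test cases before you test a single piece of actual functionality.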
The remainder of this post discusses high-level strategies employed by web development teams to test your site and tips to determine whether they’re appropriate for your project.
Smoke Testing
Smoke testing is the duct tape of software quality assurance. It basically means the team clicks around the site on various devices (perhaps picking a haphazard selection of those variants above) to make sure that it basically works. They’re looking for glaring issues or specific problems that are known risks. Like duct tape, smoke testing is fairly effective, inexpensive, fast to deploy and, in some cases, good enough. This is definitely the most common form of website quality assurance testing and is also the least expensive and time consuming.
Smoke testing is typically budgeted as a percentage of project cost. If a project includes 100 hours of development, we might allocate anywhere between 10% and 25% additional for smoke testing if more formal testing is not called for.
Test Plan Creation
The next step above smoke testing is more structured testing using a test plan. It is a huge step forward because there is documented agreement between the developer and client around what will be tested, how, and with what variants. Test plans also provide repeatability to otherwise unstructured smoke testing, so that you can fully or partially retest an application if changes are made. Test plans can range significantly in complexity and length, from a specification document that defines expected behavior and variants to detailed step-by-step scenarios and expected results that cover each variant, feature and template on the site.
Like smoke testing, test plan creation and implementation can be budgeted as a percentage of development effort, or it can be estimated as a stand-alone sub-project based on requirements. Unlike smoke testing, test plan creation cost is impacted by the number of variants included. It goes without saying that running a test plan on 10 devices takes 10 times longer than running it on 1. When included in a project, the test plan is a deliverable and should be reviewed and approved as with any other project deliverable.
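As a rough sketch of how that multiplier plays out, here is a hypothetical three-scenario test plan run across five devices (the scenario names and time estimates are invented for illustration):

```python
# Hypothetical test plan: each scenario has an estimated manual run time.
scenarios = [
    {"name": "Submit contact form", "minutes": 5},
    {"name": "Search for a product", "minutes": 8},
    {"name": "Checkout as guest", "minutes": 15},
]

# Each device/browser combination requires its own full pass.
devices = ["Chrome/Windows", "Safari/macOS", "Firefox/Windows",
           "Chrome/Android", "Safari/iOS"]

minutes_per_pass = sum(s["minutes"] for s in scenarios) * len(devices)
print(f"One full pass: {minutes_per_pass} minutes "
      f"({minutes_per_pass / 60:.1f} hours)")
```

Multiply that by every retest after a round of changes, and the manual execution cost of a thorough test plan becomes clear.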
Automated Testing
If your test plan will need to be executed dozens or hundreds of times, doing so manually quickly becomes a significant expense. At some point, it is cheaper to write scripts that run the tests automatically and report on the results. This is referred to as automated testing. Developers use special tools like Selenium to build, maintain and run these tests.
From a cost perspective, a large suite of automated tests is an expensive and relatively complex project in its own right. You’ll want to make sure the developers you’re asking to automate your test plan have experience with the appropriate tools. And you’ll still need a test plan to guide what will be automated. While automated testing makes sense for a dedicated team that will be iterating substantially over a long period of time on a software product that is time consuming to functionally test, it may not make sense for a marketing site with a shelf life of 3 years.
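As a simplified sketch of what one automated check looks like, the snippet below verifies that a contact form exposes an email field. A real suite would drive a live browser with a tool like Selenium; here a static HTML snippet stands in so the example is self-contained:

```python
from html.parser import HTMLParser

# In a real suite this HTML would come from a live browser driven by
# Selenium; a static snippet keeps the sketch self-contained.
PAGE = """
<form action="/contact" method="post">
  <input name="email" type="email">
  <button type="submit">Send</button>
</form>
"""

class InputCollector(HTMLParser):
    """Collects the name attribute of every <input> on the page."""
    def __init__(self):
        super().__init__()
        self.input_names = []

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            self.input_names.append(dict(attrs).get("name"))

def contact_form_has_email_field(html):
    collector = InputCollector()
    collector.feed(html)
    return "email" in collector.input_names

print(contact_form_has_email_field(PAGE))  # True
```

Each such check is written once and can then be re-run on every device and after every change at near-zero marginal cost, which is where automation pays for itself.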
Unit Testing
Unit testing is the practice of writing code that verifies individual functions or components in isolation. If developers have to write a function that tests and validates every function they write, you can see how this practice would substantially increase the time spent writing code and the number of lines of code that must be maintained going forward. And, yes, even the unit test code can have bugs and must be tested by the developer. As such, imposing a requirement of unit testing on a software project can almost double the cost. Proponents of unit testing use it as part of a “test driven” development methodology and consider it more of a design and process tool than a quality tool. For this reason, unit testing is not recommended for the sake of quality control alone.
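To make the idea concrete, here is a minimal example: a small helper function alongside the unit tests that exercise it (the function and test names are hypothetical):

```python
import unittest

def normalize_email(raw):
    """Trim whitespace and lowercase an email address before storing it."""
    return raw.strip().lower()

class NormalizeEmailTest(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_email("  Jane@Example.COM "),
                         "jane@example.com")

    def test_clean_input_unchanged(self):
        self.assertEqual(normalize_email("jane@example.com"),
                         "jane@example.com")

if __name__ == "__main__":
    # exit=False so the script continues after the test run
    unittest.main(argv=["tests"], exit=False)
```

Note the ratio: a one-line function earned roughly a dozen lines of test code, which is why a blanket unit-testing requirement adds so much to a project's cost.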
Load Testing
Load testing is the use of automated tools (similar to automated testing tools) that don’t just test your site with one user, but can run that same test (or a variety of tests) with hundreds or thousands of users, each with unique data inputs. Load testing is useful if your site is expected to experience substantial spikes in traffic or if performance under load is a concern. For example, if you are going to send out an email campaign to 150,000 customers asking them to complete a task on your website, you want to make sure the site can handle the kind of traffic spike that will generate. Load testing is also a valuable diagnosis tool if your site only experiences problems under heavy load.
Like automated testing, load testing can be a fairly substantial project in its own right. You need to start with a test plan that defines the path or paths you want the automated users to take, the data that will vary between users, how many users will be tested at once and for how long. If you’re considering load testing to solve a performance issue, be sure to consider the cost of additional hardware if needed. Hardware is usually less expensive (at least in the short term) than formal load testing and performance remediation.
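A bare-bones sketch of the idea, with a simulated request standing in for real HTTP traffic (production load tests use dedicated tools such as JMeter or Locust rather than hand-rolled scripts):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for an HTTP request against the site under test.
def simulated_request(user_id):
    start = time.perf_counter()
    time.sleep(0.01)  # pretend the server took ~10ms to respond
    return time.perf_counter() - start

def run_load_test(num_users):
    """Fire num_users concurrent 'requests' and summarize response times."""
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        durations = list(pool.map(simulated_request, range(num_users)))
    return {
        "users": num_users,
        "avg_ms": 1000 * sum(durations) / len(durations),
        "max_ms": 1000 * max(durations),
    }

print(run_load_test(50))
```

The interesting output is how average and worst-case response times degrade as the user count climbs; that curve tells you where the site starts to buckle.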
Finding a Balance
With all of these testing methods available, choosing the appropriate level of quality assurance testing for your website project is more art than science. Consider these questions when determining the level of testing you ask for as part of your next web project.
Given your scope of work, how much testing do your timeline and budget realistically allow for?
Discuss testing options in context of timeline and budget with your vendor or web team to find a working balance.
Is the nature of your site such that you’re willing to forgo functionality for additional testing?
There’s a time and place for exciting new functionality that wows users, even if it’s not perfect for everyone. And there are circumstances where bugs are unacceptable and a simpler application that is more reliable is needed.
Who will be using the site and under what circumstances?
A thorough understanding here can help to hone and reduce the number of variants you need to test for. If you’re replacing an existing website, Google Analytics is invaluable to answer these questions.
Will this website undergo frequent functionality changes?
Content changes don’t require lots of testing, but if you’re rapidly changing and adding custom functionality, the incidence of bugs will rise quickly without formal and repeatable testing in place.
Hopefully these points provide some context and clarity around how various quality assurance testing methods fit into your next website project. Happy testing!