An explanation of how best to handle a common test design challenge related to dynamic cardinality.
“Select All That Apply” scenarios occur frequently in web applications and are typically mishandled by test designers. Let’s look at the following example for some context:
As you can see, you have many topping choices available. You can choose as many or as few as you would like, and there are no guidelines surrounding the maximum number of toppings you can add (though it may get a bit pricey in reality). When facing a situation like this, the first test design strategy that occurs to many test designers is to create a single parameter called “Pizza Toppings” in DesignWise and list each of the 24 available toppings as Values, as shown below.
Avoid that Common Mistake. Do not list all of those values in a single parameter!
While this approach seems natural, it has two major disadvantages. The first problem is that every test scenario generated from this design would have one and only one pizza topping. A model built this way would only produce test scenarios for pizzas with pepperoni OR sausage OR bell peppers. In reality, a user might actually order a pizza with pepperoni AND sausage AND bell peppers.
The second problem with this test design strategy is that the number of 2-way tests generated would be unnecessarily large. This is because each of the 24 toppings would need to appear together with every other value in the model. This would result in at least 96 test scenarios: 24 test scenarios involving small-sized pizzas (one with each pizza topping), 24 more tests with medium-sized single-topping pizzas, 24 more tests with large-sized single-topping pizzas, and 24 more tests with extra-large single-topping pizzas.
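To see where that figure comes from, a short sketch helps. Pairwise coverage requires every pair of values across every pair of parameters to appear in at least one test, so the test count can never be smaller than the product of the two largest parameter cardinalities. The parameter names and sizes below are illustrative, matching the single-parameter model described above.

```python
# Sketch: why a single 24-value "Pizza Toppings" parameter inflates the
# 2-way test count. Pairwise coverage must exercise every value pair for
# every pair of parameters, so the suite size is at least the product of
# the two largest parameter cardinalities.
# (Parameter names/sizes here are illustrative, not from a real model.)

parameters = {
    "Pizza Toppings": 24,  # all toppings crammed into one parameter
    "Size": 4,             # small, medium, large, extra large
}

# Every (topping, size) pair must appear in at least one test.
largest, second = sorted(parameters.values(), reverse=True)[:2]
lower_bound = largest * second
print(lower_bound)  # 96 distinct (topping, size) pairs to cover
```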
Therefore, we need to account for pizzas that contain more than one topping in our tests. We'd also like to reduce the number of tests in our 2-way test set. How do we accomplish these goals? While there are multiple strategies you could employ to ensure that pizzas with multiple toppings appear in your tests, let's start with the most basic and widely applicable solution.
We want to start by including each of the available toppings as parameters. This will ensure that each and every topping can appear on a single pizza together. But here’s where it gets a bit tricky.
If we only include values of “Yes” and “No” for each of the toppings, the pizza ordered in each test will end up with an average of 12 toppings. Since there are very few real-world scenarios where a user would actually order a pizza with 12 toppings, we do not want so many of our tests to include this many toppings. But don’t worry: this can be fixed with a simple weighting strategy inside each parameter.
Rather than having one “Yes” and one “No” as values for each pizza topping parameter, we can force the tests to include fewer toppings by including a larger number of “No” values than “Yes” values. Let’s say we want an average of about 6 pizza toppings per pizza. We can accomplish this by weighting the values at a 3:1 ratio of “No” to “Yes”. In order to provide the tester with easy-to-follow instructions, let’s change “Yes” to “add {Parameter Name}” and “No” to “do not add {Parameter Name}”.
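The arithmetic behind the weighting can be sketched in a few lines. Assuming 24 independent topping parameters and that values surface in proportion to their weights, the expected number of toppings per pizza is just 24 times the probability of a "Yes":

```python
# Sketch: expected toppings per generated pizza as a function of the
# "No" to "Yes" weighting ratio, assuming 24 independent topping
# parameters whose values appear in proportion to their weights.

TOPPINGS = 24

def expected_toppings(no_weight, yes_weight=1):
    """Average toppings per pizza when each topping parameter carries
    `no_weight` copies of "No" for every `yes_weight` copies of "Yes"."""
    p_yes = yes_weight / (no_weight + yes_weight)
    return TOPPINGS * p_yes

print(expected_toppings(1))  # 1:1 ratio -> 12.0 toppings on average
print(expected_toppings(3))  # 3:1 ratio -> 6.0
print(expected_toppings(7))  # 7:1 ratio -> 3.0
```

This is a back-of-the-envelope estimate; a pairwise generator balances value pairs rather than sampling randomly, so the actual average in a generated suite may drift slightly from these figures.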
The resulting parameters & values look like this:
Multiple pizza toppings can appear on the same pizza.
The parameters are weighted so that a relatively realistic number of toppings (an average of about 6) appear in each scenario.
It only takes 26 tests to test every 2-way interaction in the system and achieve full pairwise coverage. Obviously we would want to include parameters like Size, Crust Type, and Sauce Type in the final version of this model, but these additions would only minimally, if at all, increase the number of tests.
If you accept the 3:1 weighting of the parameters, great! Execute your tests and be merry! But if you are trying to ensure that each test contains an even more realistic average number of toppings and are comfortable with increasing the quantity of tests, you can further adjust the weighting ratio.
Increasing the ratio to 7:1 of “No” to “Yes” increases the number of tests from 26 to 91, but leads to an average of less than 3 toppings per pizza. You can continue to adjust the ratio of “No” to “Yes” to fit your exact needs.
In a similar vein, you can adjust the weighting of each individual topping based on the frequency with which it will actually appear on a pizza. Customers will obviously choose Pepperoni as a topping more often than they would order Hot Sauce, so it might make sense to weight these toppings differently. For instance, you could apply a 2:1 ratio of “No” to “Yes” for Pepperoni, while increasing the ratio to 4:1 of “No” to “Yes” for Hot Sauce. This would result in the most frequently ordered toppings appearing more often in your tests without drastically increasing the total number of tests.
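The per-topping weighting translates directly into appearance probabilities. A minimal sketch, using illustrative ratios rather than real order data:

```python
# Sketch: per-topping weighting. Each topping gets its own "No":"Yes"
# ratio, so popular toppings surface more often in generated tests.
# The ratios below are illustrative assumptions, not real order data.

ratios = {          # "No" weight per single "Yes" weight
    "Pepperoni": 2, # 2:1 -> appears in roughly 1/3 of tests
    "Hot Sauce": 4, # 4:1 -> appears in roughly 1/5 of tests
}

for topping, no_weight in ratios.items():
    p_appear = 1 / (no_weight + 1)
    print(f"{topping}: {p_appear:.3f}")
```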
If having an average of 6 “Select All That Apply” items per test scenario seems excessive, this next test strategy may be preferable.
For this example, let’s say we want to model a maximum of 4 pizza toppings per pizza. Your parameters should be “Pizza Topping 1,” “Pizza Topping 2,” “Pizza Topping 3,” and “Pizza Topping 4,” as you might expect. Where this gets interesting is deciding on the parameter values.
You could simply list out all the pizza toppings for each parameter and be done, but more than likely there is a greater chance of the 1st pizza topping being added to a pizza than a 4th topping. We want to account for this difference in our test model. We can accomplish this by weighting the parameters to fit our exact needs, as shown below.
By adding increasing numbers of “No ____ Topping” values, we can ensure that a 4th topping appears less often than a 3rd topping, a 3rd topping less often than a 2nd topping, and a 2nd topping less often than the first topping. We can also adjust the ratio of “No Topping” to “Topping” based on the frequency with which we want multiple toppings to appear in our tests.
This may seem like the perfect strategy, but there is one problem: using this strategy will result in a very large number of tests unless we incorporate one or more of the test-reduction-focused test design strategies described below. Even just the 4 parameters in the above example would require 116 scenarios in order to achieve 100 percent pairwise coverage.
One way to avoid this problem is to use mixed-strength test coverage, applying 1-way coverage to each of the pizza toppings while keeping 2-way coverage for any additional parameters. Mixed-strength test models allow you to select different levels of coverage for different parameters. This lets you control the balance between increased test thoroughness and the workload created by additional tests. If we simply adjust the coverage level of each of the 4 pizza toppings to 1-way coverage while keeping the rest of the parameters at 2-way coverage, we are left with only 12 scenarios.
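The intuition behind that drop can be sketched with lower-bound arithmetic. 1-way coverage only requires each value to appear at least once, so the 1-way parameters alone need no more tests than the largest value count among them; the remaining 2-way parameters still need every value pair covered. The parameter sizes below are illustrative assumptions, not taken from the actual model.

```python
# Sketch: why dropping the topping parameters to 1-way coverage shrinks
# the suite. 1-way coverage only needs every value to appear once, so the
# 1-way parameters require at most `max cardinality` tests; the 2-way
# parameters still need every value pair. Sizes here are illustrative.

one_way = {f"Pizza Topping {n}": 8 for n in range(1, 5)}  # toppings + "No" values
two_way = {"Size": 4, "Crust Type": 3}

# 1-way parameters: each value must appear once -> max cardinality tests.
min_for_one_way = max(one_way.values())
# 2-way parameters: every value pair must appear -> product of two largest.
a, b = sorted(two_way.values(), reverse=True)[:2]
min_for_two_way = a * b

# The suite must satisfy both requirements, so the bound is the larger one.
print(max(min_for_one_way, min_for_two_way))  # 12
```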
While there are risks to using mixed-strength test coverage, this is an optimal test design concept to ensure that every pizza topping is tested at least once in this “maximum number of selections” model.
The Value Expansion feature is another way to keep the number of generated tests small, and it often works well with the “Modeling a Maximum Number of Selections” strategy. Value Expansions are frequently a strong alternative to the 1-way mixed-strength approach described immediately above.
Particularly when your underlying business rules treat certain values (or in this situation, toppings) in the same way, you can assemble those values into groups. For instance, if the pizza application makes no internal distinction between pepperoni, ham, and bacon and only cares that each of those toppings is a meat, you can easily account for this inside your DesignWise model. To do so, create a parameter value called “Meat” and include each type of meat inside a value expansion for that value.
An example of a test model following this pattern is shown below. It generates only 44 tests to achieve 100% 2-way coverage, significantly limiting the total number of tests without sacrificing test coverage.
Please note that any time you use the Value Expansion feature, there will not be a guarantee that pairwise coverage will be achieved for all of the items in your Value Expansion lists.
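The mechanics behind that caveat can be illustrated with a small sketch. Here the suite is generated against the grouped value (“Meat”), and each occurrence is then swapped for a concrete member in round-robin order. The function and data names are hypothetical, not the DesignWise implementation.

```python
# Sketch of what a Value Expansion does: tests are generated with the
# grouped value ("Meat"), then each occurrence is expanded to a concrete
# member in round-robin order. Names here are illustrative only.

from itertools import cycle

expansions = {"Meat": ["Pepperoni", "Ham", "Bacon"]}

def expand(tests, expansions):
    """Replace each grouped value with members of its expansion, cycling
    so every member is used roughly equally often."""
    cyclers = {group: cycle(members) for group, members in expansions.items()}
    return [
        [next(cyclers[v]) if v in cyclers else v for v in test]
        for test in tests
    ]

generated = [["Large", "Meat"], ["Small", "Meat"],
             ["Medium", "No Meat"], ["Large", "Meat"]]
print(expand(generated, expansions))
# -> [['Large', 'Pepperoni'], ['Small', 'Ham'],
#     ['Medium', 'No Meat'], ['Large', 'Bacon']]
```

This also shows why pairwise coverage is only guaranteed for the grouped value: the generator never sees Pepperoni, Ham, or Bacon individually, so specific (member, size) pairs may never occur.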
The “Select All That Apply” pattern occurs frequently, and knowing how to weigh the various tradeoffs involved is important. A solid understanding of the pros and cons of the different test design approaches available in this situation will allow you to make sound decisions. This article has covered 6 strategies and suggested that 3 are usually bad and 3 are usually worth considering.