
Sample Test Cases – Insurance – Get Quote

Learn how an optimized set of scenarios for an insurance application is efficiently generated in DesignWise.

What are our testing objectives?

Each time someone applies for insurance, thirteen different test conditions are included in the test scenario. Even in this over-simplified example, there are over 300,000 possible scenarios. In this context, we want to test this quote process relatively thoroughly – with a manageable number of tests.
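The “over 300,000” figure is simply the size of the Cartesian product of all parameter values. A quick sketch, using illustrative value counts (the real counts depend on the parameters and values you choose for the model):

```python
from math import prod

# Illustrative value counts for thirteen test conditions; the actual counts
# depend on the parameters and values chosen for the model.
value_counts = [3, 3, 4, 4, 4, 3, 2, 2, 2, 2, 3, 2, 3]

total_scenarios = prod(value_counts)
print(total_scenarios)  # well over 300,000 possible combinations
```

Executing every one of those combinations is clearly impractical, which is the motivation for the optimized sets described below.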

We know that testing each item in our system once is not sufficient; we know that interactions between the different things in our system (such as a particular credit rating range interacting with a specific type of property, for example) could well cause problems. Similarly, we know that the written requirements document will be incomplete and will not identify all of those potentially troublesome interactions for us. As thoughtful test designers, we want to be smart and systematic about testing for potential problems caused by interactions without going off the deep end and trying to test every possible combination.

DesignWise makes it quick and simple for us to select an appropriate set of tests whatever time pressure might exist on the project or whatever testing thoroughness requirements we might have. DesignWise-generated tests automatically maximize variation, maximize testing thoroughness, and minimize wasteful repetition.

What interesting DesignWise features are highlighted in this sample model description?

This sample write-up includes descriptions of the following features:

Bound Pairs – How to prevent invalid combinations from appearing in your set of tests more quickly.

Forced Interactions – How to force certain high priority scenarios to appear in your set of tests.

Auto-Scripting – How to save time by generating detailed test scripts in the precise format you require (semi)-automatically.

Coverage Graphs – How to get fact-based insights into “how much testing is enough?”.

Matrix Charts – How to tell exactly which coverage gaps would exist in our testing if we were to stop executing tests at any point before the final DesignWise-generated test.

Using DesignWise’s “Coverage Dial” – How to generate sets of thorough 2-way tests and/or extremely thorough 3-way tests in seconds.

What interesting test design considerations are raised in this particular sample?

We might have multiple values that are each invalid in combination with one particular value. If that is the case, marking many, many invalid pairs one at a time can be time-consuming (and frustrating). DesignWise offers a shortcut: its Bound Pair feature.

In this model, we might have a scenario where the applicant does not have a spouse. If that is the case, we want to ensure that the scenario does not include an age for a spouse who does not exist. Based on discussions about which age ranges would be necessary for testing, we have 3 particular ranges for spouses. In order to have scenarios that do not include a spouse, we add a fourth value, ‘N/A,’ to the Age of Spouse parameter. Now, we could individually invalidate each of the three age ranges against the option of not adding a spouse, or we could use the Bound Pair feature.

For this case, we create a Bi-Directional Bound Pair between ‘Add Spouse? – Do Not Click on Add Spouse’ and ‘Age of Spouse – N/A.’ A Bi-Directional Bound Pair ensures that every test that has ‘Add Spouse? – Do Not Click on Add Spouse’ will always be paired with ‘Age of Spouse – N/A’ and every test that has ‘Age of Spouse – N/A’ will be paired with ‘Add Spouse? – Do Not Click on Add Spouse.’

You will also note that when a scenario has ‘0’ children added in the insurance quotation, it should not include an average age for children. So, similarly, we marry the values of ‘How Many Children – 0’ with ‘Average Age of Children – N/A.’
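Conceptually, a bi-directional Bound Pair behaves like an “if and only if” constraint on a scenario. The sketch below uses the parameter and value names from this model, but the helper function itself is ours for illustration, not part of DesignWise:

```python
# Each bound pair: (parameter 1, value 1, parameter 2, value 2).
# Bi-directional means value 1 appears exactly when value 2 appears.
BOUND_PAIRS = [
    ("Add Spouse?", "Do Not Click on Add Spouse", "Age of Spouse", "N/A"),
    ("How Many Children", "0", "Average Age of Children", "N/A"),
]

def satisfies_bound_pairs(scenario):
    """Return True when every bound pair holds in both directions."""
    for p1, v1, p2, v2 in BOUND_PAIRS:
        # The two sides must agree: both present or both absent.
        if (scenario.get(p1) == v1) != (scenario.get(p2) == v2):
            return False
    return True

ok = satisfies_bound_pairs({
    "Add Spouse?": "Do Not Click on Add Spouse",
    "Age of Spouse": "N/A",
    "How Many Children": "2",
    "Average Age of Children": "5 - 10",
})
bad = satisfies_bound_pairs({
    "Add Spouse?": "Click on Add Spouse",
    "Age of Spouse": "N/A",
})
```

A single bound pair replaces what would otherwise be several individually marked invalid pairs, which is exactly the time savings described above.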

It is often useful to start by identifying a verb and a noun for the scope of our tests

Consider additional variation ideas by asking the “newspaper questions” about our verb and noun – who, what, when, why, where, how, how many?

Designing powerful software tests requires people to think carefully about potential inputs into the system being tested and how they might impact the behavior of the system. We strongly encourage test designers to start with a verb and a noun to frame a sensible scope for a set of tests and then ask the “newspaper reporter” questions: who? what? when? where? why? how? and how many?


Who is asking for the insurance quote and what characteristics do they have that will impact how the System Under Test behaves? In particular…

  • How old is the applicant?
  • What is the applicant’s gender? Do they have to tell us?
  • Does having a family matter?

Who Else

  • Does the applicant have a spouse?
  • How old is the spouse, if they have one?
  • Do they have any children?
  • How old are the children, if they have any?


Where

  • Where is the applicant located?
  • What constitutes a valid location?
  • Where are they requesting the quote? (e.g., online, on the phone, in person, etc.)

What kind

  • Do they want dental coverage?
  • Do they want maternity coverage?


When

  • What time of day do they get the quote?

Variation Ideas entered into DesignWise’s Parameters screen

Asking the newspaper questions described above helps us understand the potential ways the system under test might behave.

Once we have decided which test conditions are important enough to include in this model (and excluded things – like “What time of day do they get the quote?” in this example – that will not impact how the system being tested operates), DesignWise makes it quick and easy to systematically create powerful scenarios that will allow us to maximize our test execution efficiency.

Once we enter our parameters into DesignWise, we simply click on the “Scenarios” link in the left navigation pane.

DesignWise helps us identify a set of high priority scenarios within seconds

The coverage achieved in the 24 tests above is known as pairwise testing coverage (or 2-way interaction coverage). DesignWise-generated pairwise tests have been proven in many contexts and types of testing to deliver large thoroughness and efficiency benefits compared to sets of hand-selected scenarios.
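To make “pairwise” concrete, the sketch below greedily builds tests until every possible value pair has appeared together at least once. This is an illustrative Python rendering of the general technique, not DesignWise’s algorithm, and it will not reproduce the optimized 24-test set described above:

```python
from itertools import combinations

def pairwise_tests(parameters):
    """Greedy sketch of 2-way (pairwise) test generation.

    `parameters` maps each parameter name to its list of values.
    """
    names = list(parameters)

    # Enumerate every value pair that must appear together at least once.
    uncovered = set()
    for i, j in combinations(range(len(names)), 2):
        for v1 in parameters[names[i]]:
            for v2 in parameters[names[j]]:
                uncovered.add((names[i], v1, names[j], v2))

    def newly_covered(partial, name, value):
        # Count uncovered pairs this value would complete with the partial test.
        count = 0
        for other, other_value in partial.items():
            first, second = sorted(
                [(other, other_value), (name, value)],
                key=lambda item: names.index(item[0]),
            )
            if (first[0], first[1], second[0], second[1]) in uncovered:
                count += 1
        return count

    tests = []
    while uncovered:
        # Seed each test with a still-uncovered pair, then fill in the
        # remaining parameters greedily.
        n1, v1, n2, v2 = next(iter(uncovered))
        test = {n1: v1, n2: v2}
        for name in names:
            if name not in test:
                test[name] = max(
                    parameters[name],
                    key=lambda v: newly_covered(test, name, v),
                )
        uncovered -= {
            (a, va, b, vb)
            for (a, va, b, vb) in uncovered
            if test[a] == va and test[b] == vb
        }
        tests.append(test)
    return tests
```

Running `pairwise_tests` on even a handful of parameters shows how few tests are needed to cover every pair relative to the full Cartesian product; commercial tools apply far more sophisticated optimization to shrink the count further.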

DesignWise gives test designers control over how thorough they want their testing coverage to be. As in this case, DesignWise allows testers to quickly generate dozens, hundreds, or thousands of tests using DesignWise’s “coverage dial.” If you have very little time for test execution, you would find those 24 pairwise tests to be dramatically more thorough than a similar number of tests you might select by hand. If you had a lot more time for testing, you could quickly generate a set of even more thorough 3-way tests (as shown in the screen shot immediately below).

Selecting “3-way interactions” generates a longer set of tests which cover every single possible “triplet” of Values

DesignWise generates and displays this extremely thorough set of 90 three-way tests within a few seconds. This set of 3-way coverage strength tests would be dramatically more thorough than the manually selected test scenarios typically used by large global firms when they test their systems.

The only defects that could sneak by this set of tests would be these two kinds:

  • 1st type – Defects that were triggered by things not included in your test inputs at all (e.g., if special business rules should be applied to an applicant living in Syria, that business rule would not be tested because that test input was never included in the test model at all). This risk is always present every time you design software tests, whether or not you use DesignWise.

This risk is, in our experience, much larger than the second type of risk:

  • 2nd type – Extraordinarily unusual defects that would be triggered if and only if 4 or more specific test conditions all appeared together in the same scenario. E.g., if the only way a defect occurred was if an applicant (i) is 65 years old, (ii) has a Spouse, (iii) lives in a Rural zip code, and (iv) wants Maternity coverage. It is extremely rare for defects to require 4 or more specific test inputs to appear together; many testers test software for years without seeing such a defect.

If a tester spent a few days trying to select tests by hand that achieved 100% coverage of every single possible “triplet” of Values (such as, e.g., (i) 18 year old applicant, and (ii) Without a Spouse, and (iii) has more than 6 children), the following results would probably occur:

  • It would take far longer for a tester to attempt to select a similarly thorough set of tests and the tester would accidentally leave many, many coverage gaps.
  • The tester trying to select tests by hand to match this extremely high “all triples” thoroughness level would create far more than 90 tests (which is the optimized solution, shown above).
  • Almost certainly, if the tester tried to achieve this coverage goal in 100 or fewer tests, there would be many, many gaps in coverage (e.g., 3-way combinations of Values that the tester accidentally forgot to include).
  • Finally, unlike the DesignWise-generated tests which systematically minimize wasteful repetition, many of the tester’s hand-selected scenarios would probably be highly repetitive from one test to the next; that wasteful repetition would result in lots of wasted effort in the test execution phase.
We can force specific scenarios to appear in tests

We easily forced a few high priority scenarios to appear by using DesignWise’s “Forced Interactions” feature.

You’ll notice from the screen shots of 2-way tests and 3-way tests shown above that some of the Values in both sets of tests are bolded. Those bolded Values are the ones we “forced” DesignWise to include by using this feature.

Auto-scripting allows you to turn test data tables (from the “Scenarios” screen) into detailed scripts

The Auto-scripting feature saves testers a lot of time by partially automating the process of documenting detailed, stepped-out test scripts.

We document a single test script in detail from beginning to end. As we do so, we indicate where our variables (such as “Age of Primary Applicant,” “County,” and “Gender”) appear in each sentence. That’s it. As soon as we document a single test in this way, we’re ready to export every one of our tests.

From there, DesignWise automatically modifies the single template test script we create and inserts the appropriate Values into every test in the model (whether it has 10 tests or 1,000).

We can even add simple Expected Results to our detailed test scripts

If you describe Expected Results like the one above on the “Manual Auto-Scripts” screen, DesignWise will automatically add them to every applicable test step in every applicable test in your model. Because we entered this Expected Result, every test in this model will show it after Test Step 14.

It is possible to create simple rules, using the drop-down menus in this feature, that determine when a given Expected Result should appear, such as: “When ____ is ____ and when ____ is not ____, then the Expected Result would be ____.”
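A rule of that shape is easy to picture in code. The sketch below is purely illustrative: the rule structure is ours, and the specific condition and Expected Result wording are hypothetical, not taken from the model:

```python
# Each rule: (param, required value, other param, disallowed value, result).
# This mirrors the "when X is A and Y is not B" drop-down shape described
# above; the concrete rule and wording here are hypothetical examples.
RULES = [
    ("Add Spouse?", "Click on Add Spouse", "Age of Spouse", "N/A",
     "The quoted premium reflects the spouse's age"),
]

def expected_results(test):
    """Return the Expected Results whose rules fire for this test."""
    return [
        result
        for param, value, other, not_value, result in RULES
        if test.get(param) == value and test.get(other) != not_value
    ]
```

Because each rule is evaluated against whatever tests exist at export time, regenerating or reordering the test set leaves the rules valid, which is the maintenance benefit described below.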

This Expected Results feature makes it easy to maintain test sets over time because rules-based Expected Results will automatically update and adjust as test sets get changed over time.

Coverage charts allow teams to make fact-based decisions about “how much testing is enough?”

After executing the first 9 tests of this model’s 2-way set of tests, 77.5% of all possible “pairs” of Values that exist within the system will have been tested together. After all 24 tests, every possible “pair” of Values in the system will have been tested together (100% coverage).

This chart, and the additional charts shown below, provide teams with insights about “how much testing is enough?” And they clearly show that the amount of learning / amount of coverage gained from executing the tests at the beginning of a test set is much higher than the learning and coverage gained by executing the tests toward the end of the set. This type of “diminishing marginal return” is very often the case with scientifically optimized test sets such as these.

DesignWise tests are always ordered to maximize the testing coverage achieved in however much time there is available to test. Testers should generally execute the tests in the order that they are listed in DesignWise; doing this allows testers to stop testing after any test with the confidence that they have covered as much as possible in the time allowed.

We know we would achieve 77.5% coverage of the pairs in the system if we stopped testing after test number 9, but which specific coverage gaps would exist at that point? See the matrix chart below for that information.

The matrix coverage chart tells us exactly which coverage gaps would exist if we stopped executing tests before the end of the test set

The matrix chart above shows every specific pair of values that would not yet have been tested together if we were to stop testing after test number 10.

For example, in the first 10 tests, there is no scenario that includes both (a) “Age of Primary Applicant – 40 – 64.9” together with (b) “Gender – Male.”

You may notice that there are several black boxes in the Matrix Chart above. Those black boxes represent the invalid pairs created via the Invalid Pairs or Married Pairs features. We mark them as black to indicate that DesignWise will not pair them together, whereas the red boxes are those pairs that have yet to be covered.
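The arithmetic behind these coverage figures and gap charts is straightforward to sketch. Assuming tests are represented as dictionaries of parameter values (our representation for illustration, not DesignWise’s internals), the covered fraction and the remaining gaps after the first n tests can be computed like this:

```python
from itertools import combinations

def pair_coverage(parameters, tests, n):
    """Fraction of value pairs covered by the first n tests, plus the gaps."""
    all_pairs = set()
    for p1, p2 in combinations(sorted(parameters), 2):
        for v1 in parameters[p1]:
            for v2 in parameters[p2]:
                all_pairs.add((p1, v1, p2, v2))

    covered = {
        pair
        for pair in all_pairs
        if any(
            t.get(pair[0]) == pair[1] and t.get(pair[2]) == pair[3]
            for t in tests[:n]
        )
    }
    return len(covered) / len(all_pairs), all_pairs - covered

# Tiny illustration with two parameters and two hand-written tests.
params = {"Gender": ["Male", "Female"], "Dental Coverage": ["Yes", "No"]}
tests = [
    {"Gender": "Male", "Dental Coverage": "Yes"},
    {"Gender": "Female", "Dental Coverage": "No"},
]
fraction, gaps = pair_coverage(params, tests, 2)
print(f"{fraction:.0%} covered, {len(gaps)} gaps remain")
```

The returned gap set is exactly the information the red boxes in the matrix chart convey: specific value pairs that no executed test has yet combined.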

We can also analyze coverage of the extremely thorough set of 3-way tests we created.

After executing the first 30 tests of this model’s 3-way set, 80.4% of all possible “triplets” of Values that exist within the system will have been tested together. After all 90 scenarios, every possible “triplet” of Values in the system will have been tested together (100% coverage).

Mind maps can be exported from this DesignWise model to facilitate stakeholder discussions.

DesignWise supports exporting in several different formats. Mind maps can be a great option if a tester wants to get quick, actionable guidance from stakeholders about which variation ideas should (or should not) be included. Mind maps quickly demonstrate to stakeholders that the test designers have thought clearly about the testing objectives, and they give stakeholders an opportunity to provide useful feedback more quickly than having them read through long documents filled with test scripts.

Detailed test scripts (complete with stepped-out tester instructions and rule-generated Expected Results) can also be exported:

The detailed test scripts shown above were created using DesignWise’s Auto-Scripts feature.

Other possible export formats could include test data tables in either CSV or Excel format or even Gherkin-style formatting.