Welcome to the Ranorex Support Portal

How to perform system-to-system integration testing in DesignWise

Learn about optimizing coverage, traceability, and the integration scenario count in DesignWise, taking wealth management as an example.

  • Core Testing Challenge
  • DesignWise Solution & Modeling Process
    • Building a Model – Step 1 – Parameter & Value Selection
    • Building a Model – Step 2 – Generating Optimal Scenarios
    • Building a Model – Step 3 – Coverage Results Comparison
    • Building a Model – Step 4 – Scripting & Export
  • Summary & Case Studies

Core Testing Challenge

While functional testing is a common entry point for DesignWise implementations (and we have multiple articles describing the benefits achieved), integration testing, both system-to-system and E2E, is often an even better fit: the greater scale and number of dependencies mean many more important possible interactions.

That increase, however, also makes decision making harder, which leads to the major challenge in testing complex systems – “how much testing is enough?”

In such situations, teams use DesignWise to quickly determine the optimal answers to both (1) “how many tests?” and (2) “which specific tests?”.

We will describe how these benefits come to life using a generalized wealth management example where System A handles risk profile evaluation (like this), System B handles opening an account (like this), and System C handles placing the trade.

DesignWise Solution & Modeling Process

Building a Model – Step 1 – Parameter & Value Selection

Note: the general logic of “Parameter = a step in the flow that impacts system outcomes and can be represented with finite variation” still applies throughout the article; the notes below build on top of it.

The general rules for system-to-system parameter & value identification:

  1. prioritize parameters that affect more than 1 system;

  2. prioritize values heavily involved in business rule/integration triggers;

  3. prioritize value expansions for less impactful options.

In financial implementations, each system usually contributes a “fair share” of parameters to the DesignWise model, and the key characteristic is the model shape: more “horizontal”, with fewer factors but larger value lists (Claims Processing in Insurance is another area with similar model shapes, unlike the typically “long and narrow” policy side).
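To make the “horizontal” shape concrete, here is a minimal Python sketch of such a model as plain data. Every parameter and value name below is a hypothetical illustration for the wealth management example, not DesignWise output:

```python
import math

# Hypothetical sketch of a "horizontal" system-to-system model:
# relatively few parameters, but longer value lists per parameter.
# All parameter and value names are illustrative.
model = {
    # System A: risk profile evaluation
    "Risk Tolerance": ["Conservative", "Moderate", "Aggressive"],
    "Investment Horizon": ["<1y", "1-5y", "5-10y", ">10y"],
    # System B: account opening
    "Account Type": ["Individual", "Joint", "IRA", "Trust", "Corporate"],
    "Funding Method": ["ACH", "Wire", "Check", "Transfer"],
    # System C: trade placement
    "Order Type": ["Market", "Limit", "Stop", "Stop-Limit"],
    "Asset Class": ["Equity", "ETF", "Mutual Fund", "Bond", "Option"],
}

# The exhaustive combination count this kind of model implies, before
# any combinatorial optimization:
total_combinations = math.prod(len(values) for values in model.values())
print(total_combinations)  # -> 4800
```

Even this small illustrative model yields thousands of exhaustive combinations, which is exactly why optimized scenario selection matters.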

System-to-system modeling is unlikely to hit the 200-parameter limit in DesignWise, so you have the flexibility to include factors that are less important but necessary for “passive regression” traceability and/or automation (and that could increase your defect prevention).

To make sure you both prioritize boundary values and explore options beyond them, you can use the “bias” feature within the value expansion dialog:

The interactions between all the sub-elements of each system are often less of a focus, so sections of the model could have the “major outcome” parameters (e.g., User Profile, Account Type) which will help with the algorithm selections on the next modeling step described later.

If for any reason including factors other than the “major outcome” is not feasible/relevant, you can leverage the “nested” values where the name would describe all the contextual characteristics (instead of just saying “1”):

That way, any reviewer could clearly see, e.g., which combinations of qualifying questions you have covered and which you have intentionally omitted (pipe delimiters are not special/required).
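The “nested” naming idea can be sketched in a few lines of Python. The qualifying-question names and the pipe-delimited format below are purely illustrative (as noted above, pipe delimiters are not special or required):

```python
from itertools import product

# Illustrative only: build "nested" value names that encode the combination
# of qualifying-question answers, instead of opaque labels like "1".
questions = {
    "Income Verified": ["Yes", "No"],
    "Accredited": ["Yes", "No"],
}

nested_values = [
    "|".join(f"{question}:{answer}" for question, answer in zip(questions, combo))
    for combo in product(*questions.values())
]
print(nested_values[0])  # -> Income Verified:Yes|Accredited:Yes
```

A reviewer scanning such value names can see at a glance which answer combinations are covered, and any intentionally omitted combination is simply absent from the list.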

For convenience, we suggest keeping the parameters in the flow order on the first screen.

Once the draft model is created, the auto-generated Mind Map presents an intuitive view of the model elements and can be easily shared for collaboration and approval.

This approach allows teams to communicate clearly and collaborate efficiently by confirming early in the testing process that 1) all critical aspects have been accounted for, and 2) they have been included at the right level of detail (which is one part of the DesignWise answer to “how much testing is enough?”).

Building a Model – Step 2 – Generating Optimal Scenarios

Note: all constraints are for illustrative purposes and do not reflect the actual logic of the sample applications linked in the opening.

The horizontal model shape we mentioned earlier increases the benefits delivered by a relatively new DesignWise feature – multi-value constraints.

They allow specifying the rules in a faster and more concise manner compared to the standard icon view (if you do not see the toggle in the top left of the Constraints screen, feel free to reach out to us and request beta access).

Note: business logic for parameters like User Profile often involves more than 2 factors, which may complicate the implementation of DesignWise constraints. You can refer to the “How to Implement N-way Constraints in DesignWise” article for available methods, and we are working on improving that functionality in the future.

The last point at this step is the algorithm thoroughness selection. System-to-system DesignWise models typically utilize 2-way or Mixed-strength (depending on the parameter structure, project criticality, etc.). The dropdown settings in Mixed-strength are generally chosen based on the following parameter logic:

  • Does it impact all systems and have numerous rules/dependencies associated with it? -> Include with at least 2-way coverage selection.
  • Does it impact all systems but have few/no rules/dependencies associated with it? -> Include with 2-way selection, given short value lists plus value expansions.
  • Does it impact only 1 system but have numerous rules/dependencies associated with it? -> Include with 1-way coverage selection and a fairly exhaustive list of values (because of the constraints).
  • Does it impact only 1 system and have few/no rules/dependencies associated with it? -> It likely should not have been included in the model, but use 1-way otherwise. Any mandatory informational parameters fall in this category.
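The four rules above can be summarized as a small decision helper. This is an illustrative Python sketch of the logic, not a DesignWise API:

```python
def suggest_strength(impacts_all_systems: bool, has_many_rules: bool) -> str:
    """Sketch of the coverage-strength decision rules (illustrative only)."""
    if impacts_all_systems:
        # Cross-system parameters get at least 2-way coverage; heavily
        # rule-bound ones may justify more on critical releases.
        return "at least 2-way" if has_many_rules else "2-way"
    if has_many_rules:
        # Single-system but rule-heavy: 1-way with an exhaustive value list.
        return "1-way (exhaustive values)"
    # Single-system, few rules: likely informational; 1-way at most.
    return "1-way (consider removing)"

print(suggest_strength(True, True))  # -> at least 2-way
```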

For this example, we will assume that User Profile (the System A “outcome”), Account Type (the System B “outcome”), and Order Type are critical for the release and deserve 3-way coverage, while the rest of the model stays at 2-way because we only included the important variables.

The resulting scenarios table could look like this:

The ability to iterate at this step and quickly regenerate test cases for review (based on, e.g., requirement updates) helps clarify ambiguities much earlier in the process.

The effective combination of the level of detail in Parameters and the business-relevant coverage strength in Scenarios ensures that the DesignWise algorithm optimizes your total model scope down to a minimal number of tests that still covers all the important interactions.
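To illustrate how a combinatorial algorithm can cover every value pair in far fewer rows than the exhaustive set, here is a toy greedy pairwise (2-way) generator. It is a simplified sketch with hypothetical parameters, not the actual DesignWise algorithm:

```python
from itertools import combinations, product

# Toy greedy 2-way (pairwise) generator -- illustrative only.
params = {
    "User Profile": ["1", "2", "3"],
    "Account Type": ["Individual", "Joint", "IRA"],
    "Order Type": ["Market", "Limit"],
}
names = list(params)

# Every value pair across every pair of parameters must appear in some test.
uncovered = {
    ((p1, v1), (p2, v2))
    for p1, p2 in combinations(names, 2)
    for v1 in params[p1]
    for v2 in params[p2]
}

def pairs_of(row):
    """All 2-way value pairs exercised by one candidate test row."""
    return set(combinations(list(zip(names, row)), 2))

tests = []
while uncovered:
    # Greedily pick the candidate row covering the most uncovered pairs.
    best = max(product(*params.values()),
               key=lambda row: len(pairs_of(row) & uncovered))
    uncovered -= pairs_of(best)
    tests.append(dict(zip(names, best)))

print(len(tests))  # far fewer than the 3 * 3 * 2 = 18 exhaustive combinations
```

Real tools use much stronger algorithms (plus constraints and mixed strength), but the principle is the same: each generated row is chosen to knock out as many remaining interactions as possible.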

Next, we will discuss the last piece of the core testing “puzzle”: given the total scope, how to use DesignWise visualizations to select the right stopping point.

Building a Model – Step 3 – Coverage Results Comparison

When integration scenarios are created by hand, they often represent a fragmented view of the system and suffer from both redundancy and omissions. DesignWise, by contrast, maximizes the interaction coverage in fewer scenarios and provides complete control and traceability for each step in the test case.

If we now analyze the interaction coverage achieved and compare it with the typical manual solution, the results would often look like this:

As you can see, DesignWise-generated tests benefit from Intelligent Augmentation that ensures coverage of both (1) all specified requirements and (2) every critical system interaction. Our scenarios consistently find more defects than hand-selected test sets because interactions are a major source of system issues.
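The kind of interaction-coverage comparison shown here can be sketched in Python by measuring what fraction of all 2-way value pairs a hand-written test set actually exercises. The parameters and the manual tests below are hypothetical:

```python
from itertools import combinations

# Illustrative sketch: fraction of all 2-way value pairs exercised by a
# hand-picked test set. All names and rows are hypothetical.
params = {
    "Risk": ["Low", "High"],
    "Account": ["Individual", "Joint"],
    "Order": ["Market", "Limit"],
}
names = list(params)

all_pairs = {
    ((p1, v1), (p2, v2))
    for p1, p2 in combinations(names, 2)
    for v1 in params[p1]
    for v2 in params[p2]
}

def covered_pairs(tests):
    covered = set()
    for t in tests:
        covered |= set(combinations([(n, t[n]) for n in names], 2))
    return covered

# A typical fragmented, hand-written set: happy path plus one variation.
manual_tests = [
    {"Risk": "Low", "Account": "Individual", "Order": "Market"},
    {"Risk": "High", "Account": "Individual", "Order": "Market"},
]

coverage = len(covered_pairs(manual_tests)) / len(all_pairs)
print(f"{coverage:.0%} of 2-way interactions covered")
```

Even on this tiny model the happy-path-plus-variation approach leaves more than half of the pairwise interactions untested, which mirrors the gap the coverage comparison typically reveals.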

Taking this analysis a step further, given typical schedule deadlines, we can identify the exact subset of the total scope that is sufficient for the immediate testing goals and communicate that decision clearly to management with the combination of the Mind Map and the Coverage Matrix.

Building a Model – Step 4 – Scripting & Export

Note: some teams may choose to execute directly from the test cases table. They would leverage the “Save As” dropdown on the Scenarios screen and skip this section.

We are seeing more and more teams switch to BDD, so we will cover DesignWise Automate in this article; most general principles also apply to Manual Auto-Scripts.

First, the overall script structure is completely up to you. The number of steps, the length of each, the number of parameters per step, etc. depend on your guidelines for both test design and execution – DesignWise has the flexibility to support a wide range of preferences.

Second, for the review and export efficiency, we will be using multi-value {[]} filters to separate User Profile 1-3 and 4-6 scenarios (assuming they have different validation steps for example purposes).

Note: the “!” sign negates the values in the filter. You can check the “Usage” button on the Automate screen or the “Getting Started with Automate” guide for more details about the syntax rules.
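As a toy illustration of how such negation filters behave, the sketch below applies a simplified stand-in filter in Python. This is not the actual DesignWise filter syntax (see the “Usage” button for that):

```python
# Simplified stand-in for value-list filtering with "!" as negation.
# Illustrative only; not the actual DesignWise syntax.
def matches(scenario, param, value_filter):
    negate = value_filter.startswith("!")
    values = value_filter.lstrip("!").split(",")
    hit = scenario[param] in values
    return not hit if negate else hit

scenarios = [
    {"User Profile": "1"}, {"User Profile": "2"},
    {"User Profile": "4"}, {"User Profile": "5"},
]
low = [s for s in scenarios if matches(s, "User Profile", "1,2,3")]
high = [s for s in scenarios if matches(s, "User Profile", "!1,2,3")]
print(len(low), len(high))  # -> 2 2
```

The two filtered subsets partition the scenario list, which is exactly how we separate the User Profile 1-3 and 4-6 scenarios for their different validation steps.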

Lastly, we strongly recommend sharing the DesignWise models with automation engineers early in the process to allow them enough time to review and provide feedback on step wording, value names, etc.

Note: if present, only value expansions are exported in the script (the same is true for CSV), so you can have abstract/categorical value names and then provide execution-ready details in the expansion.

Once the script is finalized, you can proceed to export the scenarios in the format compatible with the test management tools and/or automation frameworks. E.g., without any extra actions, you can generate the CSV for Xray alongside Java files.
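The expansion-substituting export described above can be sketched as a plain CSV writer that swaps abstract value names for their execution-ready expansions. The expansion mapping and column layout below are hypothetical and do not represent the actual Xray/DesignWise format:

```python
import csv
import io

# Hypothetical mapping from abstract/categorical value names to
# execution-ready "value expansions".
expansions = {
    "Boundary Low": "$0.01",
    "Boundary High": "$999,999.99",
}

scenarios = [
    {"Order Type": "Market", "Amount": "Boundary Low"},
    {"Order Type": "Limit", "Amount": "Boundary High"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["Order Type", "Amount"])
writer.writeheader()
for row in scenarios:
    # Only the expansion is exported, mirroring the behavior described above.
    writer.writerow({k: expansions.get(v, v) for k, v in row.items()})

print(buffer.getvalue())
```

The model keeps readable categorical names for review, while the exported artifact carries the concrete data that the automation framework needs.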

Tool-specific dialogs support metadata entry/selection that further smooths the integration process.

This step enables accelerated, optimized automation because you can:

  • Rapidly create clear, consistent steps that leverage Behavior Driven Development principles.
  • Export 1 Scenario block into multiple scripts based on the references to the data table.
  • Improve collaboration across business and technical teams to understand the testing scope as a group.

Summary & Case Studies

In conclusion, the combination of DesignWise features will allow you not only to quickly generate the optimal set of scenarios but also to answer the “how much testing is enough?” question with clarity & confidence.

The image above should be familiar from our other educational materials, and hopefully, it underscores the notion that the process & methodology are not strongly dependent on the type of testing, type of system, industry, etc.

The goal of applying DesignWise is to address the challenges of manual test creation, such as prolonged and error-prone scenario selection, gaps in test data coverage, tedious documentation, and excessive maintenance.





© 2024 Ranorex GmbH. All Rights Reserved