As more and more teams adopt the Agile methodology in software development, we hear an increasing number of questions like:
Can DesignWise still be applied, given the iterative nature of requirement generation and the uncertainty about the final state of the features?
In this article, we discuss these applicability concerns and dive deeper into the benefits DesignWise can deliver in an Agile environment throughout the SDLC.
While the question posed above is reasonable, Agile vs. Waterfall is not the best criterion for deciding whether to apply DesignWise. The same is true for dividing applications into GUI, non-GUI, microservices, etc. – that classification also does not align well with DesignWise's strengths.
DesignWise is a test design optimization tool: it focuses on the early stages of the testing process and then integrates with tools responsible for the subsequent steps, such as Ranorex Studio. In terms of effort reduction, the goal of applying DesignWise is to address the core challenges of manual test creation: prolonged and error-prone scenario selection, gaps in test data coverage, tedious documentation, and excessive maintenance.
The methodology DesignWise facilitates is based on research into the causes of production defects. Manually written test cases often represent a fragmented view of the system, focusing on individual inputs while allowing redundancy or omissions in the rest of the scenario. By contrast, DesignWise provides complete control and traceability for each step in a test case.
The tool can deliver significant benefits across project and application types, as long as their functional flows demonstrate sufficient interaction variety.
Thus, the key application characteristic is flow variation – in other words, the system should contain several decision points, with at least two steps in the process offering multiple options each. The more such points there are, the more possible paths exist through the system, and DesignWise can be applied to identify the optimal set of tests to cover them.
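To make the "path explosion" idea concrete, here is a minimal sketch in Python. The parameter names and the greedy all-pairs generator are illustrative assumptions of ours – this is a common combinatorial-testing technique, not DesignWise's actual algorithm – but it shows how a small number of decision points multiplies into many paths, and how a much smaller suite can still cover every pairwise interaction:

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedy all-pairs generator: every value pair of every two parameters
    appears in at least one selected test (illustrative only)."""
    names = list(parameters)
    # Enumerate every (param, value, param, value) pair that must be covered.
    uncovered = {(a, va, b, vb)
                 for a, b in combinations(names, 2)
                 for va in parameters[a]
                 for vb in parameters[b]}
    candidates = [dict(zip(names, combo))
                  for combo in product(*parameters.values())]
    suite = []
    while uncovered:
        # Pick the candidate test covering the most still-uncovered pairs.
        best = max(candidates, key=lambda row: sum(
            row[a] == va and row[b] == vb for a, va, b, vb in uncovered))
        suite.append(best)
        uncovered -= {(a, va, b, vb) for a, va, b, vb in uncovered
                      if best[a] == va and best[b] == vb}
    return suite

# Hypothetical flow with three decision points.
model = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "account": ["free", "premium", "admin"],
    "device":  ["desktop", "mobile"],
}
exhaustive = 3 * 3 * 2          # 18 end-to-end paths
suite = pairwise_suite(model)   # covers all value pairs with far fewer tests
```

Even in this tiny model, the pairwise suite is roughly half the exhaustive one; with eight or ten parameters the gap grows to orders of magnitude, which is exactly the situation where a dedicated optimizer pays off.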
You can refer to the diagram below for the high-level applicability decision tree.
Most often, an expectation of 10 or more test cases means it is reasonable to use DesignWise for suite generation. Although application type is not the decisive factor for DesignWise applicability, the table below illustrates “happy path” examples for each category.
The use cases discussed so far demonstrate the applicability only from the perspective of inputs. While that is often the primary factor, it is important to consider other elements:
Even minimal models (2×2) make sense if the script contains 10 or more detailed steps with data-driven expected results, since manually copying such a script always leaves room for error.
Creating a model in DesignWise may be the fastest route to an Xray, Gherkin, Java, or similar file containing the test cases. It also guarantees a consistent export format across projects.
The current release may contain only minimal information, but if the application keeps growing, you may want to get ahead of the curve and start building the DesignWise model in advance.
An application with many parameters (e.g., 8 or more) and values may still be a poor fit for DesignWise if the interactions between the elements are heavily constrained, leaving only a few possible paths through the system.
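The export-consistency point above can be sketched in a few lines: once scenarios live in a model, a single renderer produces the same Gherkin shape for every project. The function below is our own hypothetical renderer (the step wording and layout are assumptions, not DesignWise's actual export format):

```python
def to_scenario_outline(feature, rows):
    """Render model rows as a Gherkin Scenario Outline with an Examples
    table (hypothetical format, not DesignWise's actual export)."""
    names = list(rows[0])
    lines = [
        f"Feature: {feature}",
        "  Scenario Outline: generated from the test model",
        "    Given the app is configured with "
        + " and ".join(f"<{n}>" for n in names),
        "    When the user completes the flow",
        "    Then the result matches the expected outcome",
        "    Examples:",
        "      | " + " | ".join(names) + " |",
    ]
    for row in rows:  # one Examples row per generated test
        lines.append("      | " + " | ".join(str(row[n]) for n in names) + " |")
    return "\n".join(lines)

rows = [
    {"browser": "Chrome", "account": "free"},
    {"browser": "Firefox", "account": "premium"},
]
feature_text = to_scenario_outline("Checkout", rows)
```

Because every suite passes through the same renderer, the format cannot drift between teams – the benefit the article attributes to tool-based export.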
Next, let us focus on how applying DesignWise differs in an Agile environment. As a benchmark: in the waterfall world, most information is well defined by the time you reach testing, so you can move through all six traditional DesignWise steps sequentially.
By contrast, the iterative nature of the Agile process can be illustrated with the following diagram (Source):
Here, DesignWise usage should ideally start as early as Sprint Planning and requirement definition: SMEs can leverage domain knowledge and application access to specify the inputs for each requirement more accurately. The majority of the work is then performed during Sprint Execution, where testers pick up the draft models and expand them to an execution-ready state at an acceptable coverage level. Finally, data tables are exported to Ranorex Studio, manual scripts are exported to Xray, or automated ones (in BDD format) are passed to the coders for adjustment to the automation framework.
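The data-table hand-off in the last step can be as simple as writing the generated rows to CSV, which data-driven runners such as Ranorex Studio can consume. This is a generic sketch (the column names and CSV shape are our assumptions, not a documented Ranorex import format):

```python
import csv
import io

def export_data_table(rows, fileobj):
    """Write generated model rows as a CSV data table for a
    data-driven test runner (generic sketch, hypothetical columns)."""
    writer = csv.DictWriter(fileobj, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)

# Example: two generated test rows exported to an in-memory CSV.
buf = io.StringIO()
export_data_table(
    [{"username": "alice", "password": "S3cret!"},
     {"username": "bob", "password": "hunter2"}],
    buf,
)
```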
The key challenge is that not every user story is applicable (from the perspective of the number of inputs and variations), and consequently not every sprint may have sufficient scope. It is quite common for DesignWise to be used in every second or third sprint, once the testing model has enough elements to justify combinatorial exploration. And when it comes to regression for each release, DesignWise test models can easily be updated to accommodate the new functionality.
If two related features arrive in different sprints, it will likely be necessary to wait until the second enhancement ships (assuming no other parameters are known from previous releases or from application access/documentation).
It is also worth noting that sometimes a user story does not look applicable, but thinking outside the box quickly changes that assessment. For example, a user story specifying the ability to log in correctly looks like a test with two parameters and one value each. If we look deeper, however, that requirement implies a correct login for every allowed username and password format, and a rejected one for every invalid option. Suddenly, thoroughly testing a simple user story can fully leverage DesignWise's capabilities.
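The login expansion above can be sketched directly. The specific username and password categories below are hypothetical examples of ours, but they show how a "one check" story becomes a 4×4 model in which the expected result is derived from the inputs rather than written by hand:

```python
from itertools import product

# Hypothetical expansion of the "user can log in" story: each credential
# field has valid formats (True) and invalid options (False).
usernames = {"email-format": True, "phone-format": True,
             "empty": False, "unregistered": False}
passwords = {"typical": True, "max-length": True,
             "empty": False, "expired": False}

tests = [
    {"username": u, "password": p,
     # Expected outcome is computed: valid only if both inputs are valid.
     "expected": "login succeeds" if (uv and pv) else "login rejected"}
    for (u, uv), (p, pv) in product(usernames.items(), passwords.items())
]
# 4 x 4 = 16 scenarios from a story that first looked like a single check.
```

Only 4 of the 16 scenarios are happy paths; the other 12 are the rejection cases that fragmented manual test writing tends to miss.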
At this point, it is also important to keep in mind the general role of the tool in BDD practices (which often accompany an Agile transformation). Among common BDD goals, three stand out:
From the communication standpoint, reviewing the testing effort at the model level (in tabular or mind-map format) and analyzing coverage with the help of visualizations helps teams reach mutual understanding faster.
From the requirements standpoint, Gherkin already facilitates a clearer understanding of them. On top of that, DesignWise provides a platform where people with the three knowledge components (technical, business, and functional) can come together, eliminate ambiguities, and make sure each requirement clearly specifies all the critical input values.
From the automation standpoint, model-based testing allows easier maintenance and reusability, while writing Given-When-Then scripts inside the tool helps less technical users contribute more to the creation of automated tests.
Thus, your decision should revolve around the universal benefits of DesignWise and whether they would improve your status quo.