Learn how to easily create detailed, easy-to-read execution instructions with conditional expected results for your manual testing efforts.
With DesignWise’s Auto-Scripts feature, you can quickly transform sets of optimized test CONDITIONS like these…
… into customizable SCRIPTS that contain detailed tester instructions.
You can even add automatically-generated Expected Results to your steps if you want to.
Remember Mad Libs?
Creating Auto-Scripts in DesignWise is similar. Instead of adding adjectives and nouns into pre-formed sentences, however, you’ll be more like the author of the Mad Libs sentences themselves. You need to:
- Create sentences containing execution instructions that will be common to most of the test scripts, and
- Identify “spaces” to indicate where DesignWise should “fill in the blanks” you’ve left with test conditions appropriate to each scenario.
First, navigate to the Scripts -> Manual Auto-Scripts screen and (optionally) add instructions to be completed before test execution begins on ALL of these scenarios.
- Navigate to the Scripts -> Manual Auto-Scripts screen.
- Optionally, in the “Start” section at the top of the screen, enter instructions that should be completed before any of the tests are executed. The instructions that you enter into the “Start” box will appear only once, at the beginning; they will not be repeated before each and every scenario.
- Save your starting instructions.
Next, click on the “pencil” icon to enter instructions for your first test step. Alternatively, you may click the text already present for the step.
Then, enter detailed instructions for a tester for each step. For now, type Mad Libs-like sentences, as shown below, with blank lines to indicate where Values are to be inserted.
Zoomed-in view of the step text:
On the “Flight Details” screen, at the top, select __ for the class.
Then, enter the destination as __ and the country you are flying from as __.
As shown above, for example, you will want to type the words that will remain the same from test to test and leave four blanks (one for each place where Values will change from test to test):
- One blank for the type of flight,
- One blank for when the outbound flight leaves,
- One blank for the destination country, and
- One blank for whether or not there would be a Saturday-night stayover.
Next, replace those blank lines with the appropriate Parameter names.
- Highlight the first blank line.
- Confirm that the Parameter Name to be inserted is in the Parameter Name drop-down list (adjust if necessary).
- Press “Ctrl-Y” on the keyboard (or click the “Insert” link if you prefer, but the cool kids use the Ctrl-Y shortcut).
- Press “Ctrl-Enter” on the keyboard (or click the “Add/Save” link, but again, cool kids use the appropriate shortcuts).
- Rinse and repeat for the other blank lines.
You’ll notice, while you’re entering and editing your Auto-Scripts, that your sentences probably look strange.
The words inside the { curly brackets } are Parameter Names, so while you’re editing the Auto-Script your sentences won’t read like “normal” sentences. The trick is to think about what your steps will look like once the actual Values are inserted into each sentence.
As soon as you save each step, sanity prevails. You will now see the “normal” sentences you’ve constructed (with Coach and the Philippines in the examples below).
Don’t forget to save each step before you add your next one! Thankfully, DesignWise notifies you under the last edited step when there are unsaved edits.
The words that are the same between tests are in normal text. Words that change from test to test (the Values you entered on the Parameters screen) are shown in bold.
Click on different test cases in the bottom half of your screen (the preview section that mirrors the Scenarios screen) to see how your script steps change.
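If it helps to picture what happens behind the scenes, here is a minimal Python sketch of the “fill in the blanks” idea. It is only an illustration, not DesignWise’s actual code, and the Parameter Names (“Class”, “Destination”, “Origin Country”) and the second test case’s Values are assumptions made up for the example; only “Coach” and “Philippines” come from the article.

```python
# Illustrative sketch of the "fill in the blanks" idea -- not DesignWise's code.
# A step template contains {Parameter Name} placeholders that are replaced
# with each test case's Values when the script is rendered.

step_template = (
    "On the 'Flight Details' screen, at the top, select {Class} for the class. "
    "Then, enter the destination as {Destination} and the country you are "
    "flying from as {Origin Country}."
)

# Hypothetical test cases; the parameter names and most Values are invented.
test_cases = [
    {"Class": "Coach", "Destination": "Philippines", "Origin Country": "United States"},
    {"Class": "Business", "Destination": "Japan", "Origin Country": "Canada"},
]

for number, values in enumerate(test_cases, start=1):
    rendered = step_template
    for name, value in values.items():
        # str.replace is used because Parameter Names may contain spaces.
        rendered = rendered.replace("{" + name + "}", value)
    print(f"Test case {number}: {rendered}")
```

Clicking a different test case in the preview is the equivalent of rendering the same template with that case’s row of Values.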
Finally, in the “Finish” section you may want to add some instructions that will appear only once at the end of all of the scenario scripts.
Incorporating “Parameterized expected results” into your models
In the tests shown above, for example, we might want to include this Expected Result every time the necessary Values appear together in a test case:
When a customer attempts to use frequent flier miles to pay for a flight and that customer has enough miles to do so, confirm that the transaction is successfully processed and that the customer’s frequent flier balance is debited.
In the Scripts -> Manual Auto-Scripts screen, find the specific test step you want to add your Expected Result to and highlight it:
- Navigate to the Manual Auto-Scripts screen.
- Hover over the Step that you want the Expected Result to appear with.
- Click on “Add Expected Results.”
You’re setting up a simple “when / then” rule here.
Note that you’re not restricted to rules like when “Type of Payment” IS frequent flier points.
In this example, you could also create a rule that reads “IS NOT” “frequent flier points” (click on “is” between the drop-downs to switch the rule type).
If the WHEN selection is left blank, the Expected Result in the THEN field will apply to that step across all test cases (i.e., regardless of the data combinations).
Lastly, you can put Parameter Names in { } syntax inside the THEN statement. This is typically valuable in validation use cases where the step says “Enter X as {X}” and the expected result says “Validate that X is shown as {X}”. The Parameter Names do not have to match, for example if you have already included the actual expected results as Values on the Parameters screen; in such cases, the WHEN conditions are often left blank.
Finish creating your simple “When / Then” rule and save it.
That’s it! All your tests will now include the Expected Results you defined where applicable.
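Conceptually, each rule behaves like a small conditional attached to a step. The sketch below is only a model of that behavior, not DesignWise’s implementation; the “Type of Payment” parameter and “frequent flier points” Value come from the example above, while the THEN wording is adapted from the earlier frequent-flier Expected Result.

```python
# Conceptual model of a "when / then" Expected Result rule.
# Not DesignWise's implementation -- just an illustration of the behavior.

def expected_result(test_case):
    """Return the Expected Result text for one test case, or None if the rule does not apply."""
    # WHEN: "Type of Payment" IS "frequent flier points"
    if test_case.get("Type of Payment") == "frequent flier points":
        # THEN: the result text; {Parameter Name} placeholders are filled in
        # from the same test case, just like the step text itself.
        then_text = (
            "Confirm that the transaction is successfully processed and that "
            "the customer's {Type of Payment} balance is debited."
        )
        for name, value in test_case.items():
            then_text = then_text.replace("{" + name + "}", value)
        return then_text
    return None  # The rule does not apply, so no Expected Result is added.

# A blank WHEN would simply skip the condition check, attaching the THEN text
# to that step in every test case.

case = {"Type of Payment": "frequent flier points", "Class": "Coach"}
print(expected_result(case))
```

Switching the rule to “IS NOT” simply inverts the condition check.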
Important Usage Tips and things to know about the Expected Results feature
- This feature is a partial solution for straightforward Expected Results. It primarily exists so that you won’t have to manually type many simple expected results. It is not designed to handle especially complex rules that you might have.
- Be sure you understand the similarities & differences between DesignWise’s Expected Results in the Auto-Scripts screen and Expected Outcomes in the “Forced Interactions” feature. There is a big, yet subtle, difference:
- Expected Result in Manual Auto-Scripts takes the scenario data table as a “read-only” precondition and generates the “Then” content ONLY IF the conditions are satisfied (i.e., a “reactive” approach).
- Expected Outcome in Forced Interactions guarantees that the test conditions needed to satisfy it will be included in the Scenarios table at least once (i.e., a “proactive” approach, which may increase the number of test cases).
- If you want to define an Expected Result that requires 3 or more specific Values to appear in a single test script (and you’re creating pairwise sets of tests), use the “Forced Interactions” feature or a higher algorithm strength to guarantee the scenario is included in your suite; the sketch after this list shows why pairwise coverage alone is not enough. Then use the Manual Auto-Scripts feature to document the Expected Result for export.
DesignWise Automate can leverage that last column on Forced Interactions directly as an internal variable.
- If you want to define an Expected Result that requires 2 or fewer specific Values to appear in a single test script (and you’re creating pairwise sets of tests), use the Manual Auto-Scripts feature without additional prep work.
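Here is a short sketch of why the “3 or more Values” case needs Forced Interactions or a higher algorithm strength. The parameters and Values below are hypothetical, and the code only illustrates pairwise coverage in general; it is not anything DesignWise exposes. A four-test suite can cover every pair of these Values while never putting one particular three-Value combination into a single test.

```python
# Illustration: pairwise (2-way) coverage does not guarantee that a specific
# 3-Value combination appears together in any single test case.
from itertools import combinations

def covered_pairs(cases):
    """Return every (parameter index, Value) pair covered by the given cases."""
    pairs = set()
    for case in cases:
        for (i, a), (j, b) in combinations(enumerate(case), 2):
            pairs.add(((i, a), (j, b)))
    return pairs

# Hypothetical pairwise suite: (Class, Type of Payment, Saturday-night stayover).
suite = [
    ("Coach",    "frequent flier points", "Yes"),
    ("Coach",    "credit card",           "No"),
    ("Business", "frequent flier points", "No"),
    ("Business", "credit card",           "Yes"),
]

# All 8 possible combinations of these example Values (full 3-way coverage).
exhaustive = [
    (c, p, s)
    for c in ("Coach", "Business")
    for p in ("frequent flier points", "credit card")
    for s in ("Yes", "No")
]

print(covered_pairs(suite) == covered_pairs(exhaustive))      # True: every pair is covered
print(("Business", "frequent flier points", "Yes") in suite)  # False: this triple never appears
```

A Forced Interaction is what guarantees that missing triple shows up in at least one scenario, after which the Auto-Scripts rule can attach the Expected Result to it.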