
11.08.2018

Documenting a scripted test

In my last blog post I wrote about the different documentation styles that are needed in testing, depending on your test approach. One of the things I pointed out was that scripted testing might benefit from tool assistance even more than exploratory testing. This is of course not limited to documentation, but that’s what I want to focus on in this post. How can TestBench assist your scripted testing? Well, it certainly can’t script your tests for you (yet?), but it can help you.


When going for scripted testing you often want to write down a thing or two in advance. With that in mind, I want to look at:

  • Test environments
  • Preconditions
  • The test itself
  • Traceability
  • Test execution results


For the nitpickers among you: as much as some people would like results to be set in advance, I will of course only document those after test execution.


Test environment

Let’s assume that I want to test a website on a Windows 10 PC with the most popular browser, which for Germany would be Firefox. You could write this down in the test itself or in the user story. What I find more suitable is using the precondition field for it as well:


[Screenshot: Preconditions]

You don’t have to retype everything every time: after the first three letters, TestBench offers you the entries you have used in the past that start with those letters.
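
Under the hood this boils down to a simple prefix match over previously used entries. Here is a minimal sketch of the idea (an illustration with made-up names, not TestBench’s actual implementation):

```python
# Sketch of prefix-based autocompletion over previously used entries.
# Function and variable names are made up for illustration.

def suggest(history: list[str], typed: str, min_chars: int = 3) -> list[str]:
    """Suggest past entries once the typed prefix reaches min_chars letters."""
    if len(typed) < min_chars:
        return []  # nothing is offered before the third letter
    prefix = typed.lower()
    return [entry for entry in history if entry.lower().startswith(prefix)]

history = [
    "Windows 10 PC with Firefox",
    "Windows 10 PC with Chrome",
    "User is logged in as admin",
]
print(suggest(history, "Win"))
# ['Windows 10 PC with Firefox', 'Windows 10 PC with Chrome']
```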


Preconditions

When there are things that need to be fulfilled before the actual test, they usually end up as some kind of precondition. Being logged in with a certain role and the accompanying rights is something that comes to mind. As I already used the precondition field for the test environment, you have probably guessed that it is available for this as well.


The test itself

Now it comes down to the core: the test itself. TestBench comes with a predefined sequence for a test that looks like this:


[Screenshot: Test sequence]

It looks kind of rigid, but it also gives you some guidance on how to structure your test. That might be especially helpful if the people designing the tests and the people executing them are not the same. What I really like are the preparation and navigation parts, as those are often written down as test steps even though they are not part of what you want to test, but of your path to the test. Something I do regularly is test at a browser zoom of 300%, so that is exactly the kind of thing I would put down there.
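
To make that structure tangible, here is one way such a sequence could be modelled. Only the preparation and navigation parts are mentioned above; the remaining field names are my own assumptions, not TestBench’s exact fields:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestSequence:
    """Hypothetical model of a predefined test sequence (field names are assumptions)."""
    preparation: Optional[str] = None  # e.g. set up data, change browser settings
    navigation: Optional[str] = None   # the path to what you actually want to test
    steps: list[str] = field(default_factory=list)  # the actual test actions
    expected_result: Optional[str] = None

zoom_test = TestSequence(
    preparation="Set browser zoom to 300%",
    navigation="Open the checkout page",
    steps=["Fill in the address form", "Submit the order"],
    expected_result="All form fields remain visible and usable",
)
```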

You don’t have to use all parts of the sequence; there is even a “leave blank” toggle. That might sound surprising at first, but once you use it, you can see the progress bar change, so you actually get visual feedback on how much you have documented. Oh, and it helps of course with the usual Monday morning question of “Did I leave that blank on purpose on Friday, or did I just forget about it?” Another feature that aims at the same goal is the internal message that tells you which relevant parts have not been filled in or toggled:


[Screenshot: Open Activities]
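
The logic behind the progress bar and this open-activities message can be pictured as a simple completeness check: a part counts as done once it is either filled in or deliberately toggled blank. A minimal sketch with made-up field names:

```python
# Each part of a test is filled in, deliberately left blank, or still open.
# Part names and structure are made up for illustration.
parts = {
    "preparation": {"text": "Set browser zoom to 300%", "leave_blank": False},
    "navigation":  {"text": "", "leave_blank": True},   # blank on purpose
    "steps":       {"text": "", "leave_blank": False},  # still open
}

# A part counts as documented if it has content or its toggle is set.
done = [name for name, part in parts.items() if part["text"] or part["leave_blank"]]
open_activities = [name for name in parts if name not in done]

print(f"Progress: {len(done)}/{len(parts)}")  # Progress: 2/3
print("Still open:", open_activities)         # Still open: ['steps']
```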


Traceability

Traceability is something that people often want. It is also something that tools are good at providing. And it is also something that (other) people often forget or don’t do. So maybe tools aren’t that good at providing it after all? Well, that would be too easy an answer. Sometimes linking tests to stories is simply forgotten, sometimes it is complicated, and sometimes (excuse the blasphemy) it is simply not possible. TestBench takes this into account by giving you two options: you either create a test as a child element of a story, or you deliberately create it as a free test case. There is not much room for confusing the two, so it pretty much boils down to your decision. From there on, let the tool guide you.
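
One way to picture the two options is an optional parent link: a test either hangs below a story or stands on its own. Again a sketch with invented names, not TestBench’s actual data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserStory:
    key: str
    title: str

@dataclass
class TestCase:
    title: str
    parent_story: Optional[UserStory] = None  # None means a free test case

story = UserStory(key="SHOP-42", title="Customer can check out")
linked = TestCase(title="Checkout at 300% zoom", parent_story=story)  # traceable
free = TestCase(title="Smoke test of the landing page")               # deliberately free
```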


Test execution results

Welcome to the binary world! Well, almost. If you use test cases, there is at least some chance that you want to go beyond merely providing information and make some kind of recommendation concerning what you have just tested. Fail means that I would not recommend going live with what I just saw; pass means that I think it is ready. That is something you want to document. Like most (probably all) test management tools, TestBench lets you set that pass/fail mark. The result is also passed up to the user story and the epic, so that you can see those results at different aggregation levels:


[Screenshot: Passed/failed results]
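
The aggregation can be pictured as a simple roll-up. Here I assume that a single failed test fails the story and the epic above it; the exact rule is not spelled out in this post:

```python
# Hypothetical roll-up of pass/fail results: one "fail" below fails the level above.
epic = {
    "Story A": {"Test A1": "pass", "Test A2": "fail"},
    "Story B": {"Test B1": "pass"},
}

def roll_up(results: dict) -> str:
    return "fail" if "fail" in results.values() else "pass"

story_results = {story: roll_up(tests) for story, tests in epic.items()}
print(story_results)           # {'Story A': 'fail', 'Story B': 'pass'}
print(roll_up(story_results))  # fail
```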

General test reporting is of course available as well, so that comes in handy.


To sum it up, TestBench offers quite some assistance in documenting scripted tests, so at least for me expectations have been met. Is that true for you, too? Well, that is up to you, so go and have a look for yourself.


Christian Kram for TestBench Cloud Services