An overview of the test progress and the achieved quality is presented in all views. Open activities are shown attached to the affected objects, directly and always up to date.
If any essential data is missing, an open activity is created automatically. TestBench Cloud Services shows the user not only the activities to be completed but also their urgency, and aggregates this information up to the level of user stories and epics.
The object levels are displayed as follows:
Open activities are displayed with symbols, colours and numbers.
Open activities have three urgency levels:
Blue note: the activity was created less than seven days ago (low urgency).
Orange flash: the activity was created between seven and 14 days ago (medium urgency).
Red flame: the activity was created more than 14 days ago (high urgency).
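The age thresholds above can be expressed as a small classification function. This is an illustrative sketch only: the two thresholds (seven and 14 days) come from the text, while the function name, signature, and return values are assumptions, not part of TestBench Cloud Services.

```python
from datetime import datetime, timedelta

def urgency(created_at: datetime, now: datetime) -> str:
    """Classify an open activity by age (hypothetical helper)."""
    age = now - created_at
    if age < timedelta(days=7):
        return "low"      # blue note
    if age < timedelta(days=14):
        return "medium"   # orange flash
    return "high"         # red flame

now = datetime(2024, 5, 15)
print(urgency(datetime(2024, 5, 10), now))  # low
print(urgency(datetime(2024, 5, 5), now))   # medium
print(urgency(datetime(2024, 4, 1), now))   # high
```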
For more detailed information, see the online help.
Each element also has its own progress statistics. These show the user not only the progress of the test specification but also the fulfillment of the test preconditions, such as the availability of the necessary test data. Here, too, the information is aggregated and displayed at the upper levels.
The picture shows the progress of a user story: the left bar indicates the degree of completeness of the test specification of the linked test cases; the right bar indicates the degree to which the preconditions of the linked test cases are fulfilled.
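The two bars described above are simple percentages over the linked test cases. A minimal sketch, assuming hypothetical field names (`spec_complete`, `preconditions_met`) that are not part of the product:

```python
# Hypothetical test case records linked to one user story.
linked_test_cases = [
    {"spec_complete": True,  "preconditions_met": True},
    {"spec_complete": True,  "preconditions_met": False},
    {"spec_complete": False, "preconditions_met": False},
    {"spec_complete": True,  "preconditions_met": True},
]

def percentage(cases, key):
    """Share of test cases (in percent) for which the given flag is set."""
    return 100 * sum(c[key] for c in cases) / len(cases)

print(percentage(linked_test_cases, "spec_complete"))      # 75.0 (left bar)
print(percentage(linked_test_cases, "preconditions_met"))  # 50.0 (right bar)
```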
To ensure that the current status of the test results can be read at any time, the test case statuses are also displayed at all levels. Thus, the relationship between a found defect and the associated user story is always visible.
The status can be read directly from each test case, including which test cases are ready for execution (status "Ready" with an open activity "Execute test case"). At the level of the associated user story, these statuses are then aggregated and visualized for all test cases of that user story:
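The aggregation of statuses per user story described above can be sketched as a simple count. The record layout and status names here are assumptions for illustration, not the product's actual data model:

```python
from collections import Counter

# Hypothetical test case records, each linked to a user story.
test_cases = [
    {"user_story": "US-1", "status": "Passed"},
    {"user_story": "US-1", "status": "Ready"},
    {"user_story": "US-1", "status": "Failed"},
    {"user_story": "US-2", "status": "Passed"},
]

def aggregate_statuses(cases):
    """Count test case statuses per user story."""
    summary = {}
    for case in cases:
        summary.setdefault(case["user_story"], Counter())[case["status"]] += 1
    return summary

print(aggregate_statuses(test_cases))
```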
In addition to the defects found and those not yet corrected, the product quality assured by tests is also displayed. The product quality is calculated from the number of test cases that have passed in relation to the coverage of the user stories with linked test cases.
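The text does not give the exact formula for this metric, so the sketch below is one plausible interpretation: the pass rate of the test cases weighted by the share of user stories that have linked test cases at all. Function and parameter names are assumptions for illustration.

```python
def product_quality(passed, total_cases, stories_with_cases, total_stories):
    """Hypothetical quality metric: pass rate weighted by user story coverage."""
    if total_cases == 0 or total_stories == 0:
        return 0.0
    pass_rate = passed / total_cases              # e.g. 8 of 10 passed -> 0.8
    coverage = stories_with_cases / total_stories  # e.g. 4 of 5 covered -> 0.8
    return pass_rate * coverage

print(round(product_quality(8, 10, 4, 5), 2))  # 0.64
```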