Task Descriptor: Evaluate Test Results
Evaluate the results of tests, reporting the actions needed to correct the problems found.
Based on Method Task:  Evaluate Test Results
Relationships
Roles
  • Main:
  • Additional:
  • Assisting:
Inputs
  • Mandatory: None
  • Optional: None
External:
  • None
Outputs
Steps
Examine the Test Log

Collect and compile information obtained from execution logs to:

  • Capture the high-impact and high-risk issues found during the execution of the tests
  • Identify errors in test creation, data inputs, architectural anomalies, and integrating applications
  • Isolate the target of the test to determine failure points
  • Diagnose failure symptoms and characteristics
  • Assess and identify possible solutions

Once these steps have been completed, verify that enough detail has been captured to add value in determining the impact of the results.  In addition, make sure that enough information exists to assist individuals performing dependent tasks.
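The compilation step above might be sketched as follows. This is a minimal illustration only; the pipe-delimited log format, test IDs, and messages are hypothetical, and a real test tool would supply its own log structure.

```python
from collections import defaultdict

# Hypothetical log format: "<test_id>|<result>|<severity>|<message>"
LOG_LINES = [
    "TC-101|fail|high|Unhandled exception in order service",
    "TC-102|pass|none|",
    "TC-103|fail|low|Tooltip text truncated",
    "TC-104|fail|high|Timeout integrating with payment gateway",
]

def compile_log(lines):
    """Group failures by severity so high-impact issues surface first."""
    by_severity = defaultdict(list)
    for line in lines:
        test_id, result, severity, message = line.split("|")
        if result == "fail":
            by_severity[severity].append((test_id, message))
    return by_severity

issues = compile_log(LOG_LINES)
for test_id, message in issues["high"]:
    print(f"{test_id}: {message}")
```

Grouping by severity makes it easy to report the high-impact and high-risk issues first, while lower-severity failures remain available for diagnosis.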

Identify failures and propose candidate solutions

The approach to testing determines how failures are identified and what candidate solutions are proposed.

As Brian Marick observes, tests that are programmer-supporting are used to help prepare and ensure confidence in the code.  When identifying failures and proposing solutions for programmer-supporting tests:

  • Failures will be identified at an object or element level
  • Solutions will be to help clarify the problem

Tests that are business-supporting are used to uncover prior mistakes and omissions.  When identifying failures and proposing solutions for business-supporting tests:

  • Failures will identify omissions in requirements
  • Solutions will help to clarify expectations of the system

Once this information has been identified, steps to resolve the failures can be proposed.
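One way to record Marick's distinction is to tag each failure with its category, so that programmer-supporting failures (at the object or element level) and business-supporting failures (requirements omissions) can be routed differently. The record shape and sample failures below are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Failure:
    test_id: str
    category: str          # "programmer-supporting" or "business-supporting"
    description: str
    proposed_action: str

# Hypothetical failures classified per Marick's distinction
failures = [
    Failure("TC-101", "programmer-supporting",
            "Order total rounds incorrectly at the element level",
            "Clarify the rounding rule for the unit under test"),
    Failure("UAT-07", "business-supporting",
            "No requirement covers refunds after 30 days",
            "Clarify the expected behavior with stakeholders"),
]

# Business-supporting failures point at requirements omissions
omissions = [f for f in failures if f.category == "business-supporting"]
```

Filtering on the category then yields the requirements omissions to raise with stakeholders, separately from the element-level defects routed back to developers.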

Communicate test evaluation results

Communicating test results affects both the reality and the perception of the effectiveness of the tests.  When communicating test results, it is important to:

  • Know the audience so that the appropriate information is communicated in an appropriate form
  • Execute tests or scenarios that are likely to uncover the high-impact and high-risk issues, or that represent actual usage of the system

When preparing results reports, be prepared to answer the following questions:

  • How many test cases exist, and what are their states (pass, fail, blocked, etc.)?
  • How many bug reports have been filed and what are their states (open, assigned, ready for testing, closed, deferred, etc.)?
  • What trends and patterns do you see in test case and bug report states, especially opened and closed bug reports and passed and failed test cases?
  • For those test cases blocked or skipped, why are they in this state?
  • Considering all test cases not yet run--and perhaps not yet even created--what key risks and areas of functionality remain untested?
  • For failed test cases, what are the associated bug reports?
  • For bug reports ready for confirmation testing, when can your team perform the test?
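A sketch of how the state counts behind these questions might be compiled, assuming the team's test-management data can be reduced to simple lists of state labels (the sample data here is invented for illustration):

```python
from collections import Counter

# Hypothetical snapshot of current test-case and bug-report states
test_case_states = ["pass", "pass", "fail", "blocked", "fail", "pass"]
bug_report_states = ["open", "closed", "assigned", "open", "deferred"]

def summarize(states):
    """Count each state and report its share of the total as a percentage."""
    counts = Counter(states)
    total = len(states)
    return {state: (n, round(100 * n / total, 1))
            for state, n in counts.items()}

tc_summary = summarize(test_case_states)
br_summary = summarize(bug_report_states)
print("Test cases:", tc_summary)
print("Bug reports:", br_summary)
```

Taking such a snapshot at regular intervals also gives the raw data for the trend and pattern questions above, such as whether failed test cases and open bug reports are growing or shrinking over time.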
Evaluate and verify your results

Verify the appropriate information has been completed and resulting artifacts are of sufficient value.

Now that you have completed the work, it is beneficial to verify that the work was of sufficient value, and that you did not simply consume vast quantities of paper.  You should evaluate whether your work is of appropriate quality, and that it is complete enough to be useful to those team members who will make subsequent use of it as input to their work.  Where possible, use the checklists provided in OpenUp Basic to verify that quality and completeness are "good enough".

Have the people performing the downstream activities that rely on your work as input taken part in reviewing your interim work?  Do this while you still have time available to take action to address their concerns.  You should also evaluate your work against the key input artifacts to make sure you have represented them accurately and sufficiently.  It might be useful to have the author of the input artifact review your work on this basis.

Try to remember that OpenUp Basic is an iterative process and that in many cases artifacts evolve over time.  As such, it is not usually necessary--and is often counterproductive--to fully form an artifact that will only be partially used or will not be used at all in immediately subsequent work.  This is because there is a high probability that the situation surrounding the artifact will change--and the assumptions made when the artifact was created will prove incorrect--before the artifact is used, resulting in wasted effort and costly rework.  Also, avoid the trap of spending too many cycles on presentation to the detriment of content value.  In project environments where presentation has importance and economic value as a project deliverable, you might want to consider using an administrative resource to perform presentation tasks.

Properties
Multiple Occurrences
Event-Driven
Ongoing
Optional
Planned
Repeatable