Task Descriptor: Run Tests
Execute the appropriate collections of tests required to evaluate product quality. Capture test results that facilitate ongoing assessment of the product.
Based on Method Task: Run Tests
Relationships
Roles
  • Main: (none listed)
  • Additional: (none listed)
  • Assisting: (none listed)
Inputs
  • Mandatory: (none listed)
  • Optional: None
  • External: None
Outputs
  • (none listed)
Steps
Schedule test execution
Your system test suite should be run as often as possible.  Ideally, run your tests whenever new code is checked into your version control tool.  For larger systems this will prove too costly--the test suite may take several hours to run--so it must be scheduled less often.  If possible, run the test suite several times a day; at a minimum, run the suite each night, but also try to run it during working hours.  A minimal scheduling sketch follows.
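
In practice a continuous integration server or a cron job usually owns this schedule; the sketch below exists only to make the scheduling logic explicit.  It polls the clock and launches the suite at fixed hours.  The run hours, the tests/ directory, and the pytest entry point are assumptions, not something prescribed by OpenUP Basic.

# Minimal scheduler sketch: a nightly run plus one daytime run.
# RUN_HOURS and SUITE_CMD are hypothetical placeholders -- substitute
# your own schedule and suite entry point.
import subprocess
import time
from datetime import datetime

RUN_HOURS = {2, 13}                               # 02:00 nightly, 13:00 daytime
SUITE_CMD = ["python", "-m", "pytest", "tests/"]  # assumed entry point

def main() -> None:
    last_run_hour = None
    while True:
        now = datetime.now()
        if now.hour in RUN_HOURS and now.hour != last_run_hour:
            result = subprocess.run(SUITE_CMD)
            print(f"{now:%Y-%m-%d %H:%M} suite exit code: {result.returncode}")
            last_run_hour = now.hour
        time.sleep(60)  # poll once a minute

if __name__ == "__main__":
    main()

Delegating this loop to your CI tool is preferable; the point is simply that the schedule should be automated rather than relying on someone remembering to start the run.
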
Execute test suite

Run the test suite at the scheduled time.  This should be automated; a driver sketch appears after the good practices below.

Good practices:

  1. Run the test suite in a separate test environment.
  2. Ensure that you run the test suite against the latest clean build.
  3. The first step of the test suite should be to set up the test environment (e.g. ensure that the network is available, that the test database is available and reset to a known state, and so on).
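
The driver sketch below strings these practices together.  It assumes a PostgreSQL test database, a reset_test_db.sql script, and a pytest suite; all of those names are placeholders for whatever your project actually uses.

# Sketch of an automated suite driver following the practices above.
# Host names, the reset script, and the suite command are assumptions.
import socket
import subprocess
import sys

DB_HOST, DB_PORT = "testdb.example.com", 5432     # hypothetical test database
RESET_DB_CMD = ["psql", "-h", DB_HOST, "-f", "reset_test_db.sql"]
SUITE_CMD = ["python", "-m", "pytest", "tests/"]

def environment_ready() -> bool:
    """Practice 3: confirm the network and test database are reachable."""
    try:
        with socket.create_connection((DB_HOST, DB_PORT), timeout=5):
            return True
    except OSError:
        return False

def run_suite() -> int:
    if not environment_ready():
        print("Test environment unavailable; aborting run.", file=sys.stderr)
        return 1
    subprocess.run(RESET_DB_CMD, check=True)      # reset DB to a known state
    return subprocess.run(SUITE_CMD).returncode   # run against the latest build

if __name__ == "__main__":
    sys.exit(run_suite())
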
Close test suite run

The last step of the test suite should be to close the run.  To do this, you must (see the sketch after this list):

  1. Close the test log(s).  Close the test log file(s) and, where appropriate, place them in a designated folder or directory.
  2. Announce results.  Send a notice to all concerned persons informing them of the result of the test run and where the test log(s) have been placed.
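
As a sketch of this close-out step, the Python fragment below archives the log and mails a summary.  The log path, archive directory, mail host, and addresses are hypothetical placeholders.

# Sketch of the close-out step: file the closed log and notify the team.
import shutil
import smtplib
from email.message import EmailMessage
from pathlib import Path

LOG_FILE = Path("test_run.log")
ARCHIVE_DIR = Path("logs/archive")
RECIPIENTS = ["team@example.com"]

def close_run(passed: int, failed: int) -> None:
    # 1. Place the closed log in the designated directory.
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    archived = ARCHIVE_DIR / LOG_FILE.name
    shutil.move(str(LOG_FILE), archived)

    # 2. Announce the results and where the log has been placed.
    msg = EmailMessage()
    msg["Subject"] = f"Test run complete: {passed} passed, {failed} failed"
    msg["From"] = "ci@example.com"
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content(f"Logs archived at {archived.resolve()}")
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)
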
Examine the Test Log

Collect and compile information obtained from execution logs to:

  • Capture the high-impact and high-risk issues revealed by test execution
  • Identify errors in test creation, data inputs, architectural anomalies, and integrating applications
  • Isolate the target of the test to determine failure points
  • Diagnose failure symptoms and characteristics
  • Assess and identify possible solutions

Once these steps have been completed, verify that enough detail has been captured to add value in determining the impact of the results.  In addition, make sure that enough information exists to assist individuals performing dependent tasks.  A log-summarizing sketch follows.
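
One lightweight way to begin this examination is to tally recurring failure messages so that high-impact issues stand out.  The sketch below assumes a simple log format in which failures appear as lines of the form "FAIL <test-id>: <message>"; adapt the pattern to your own log layout.

# Sketch of compiling failure information from an execution log.
import re
from collections import Counter
from pathlib import Path

FAIL_LINE = re.compile(r"^FAIL\s+(?P<test>\S+):\s*(?P<message>.*)$")

def summarize_failures(log_path: Path) -> Counter:
    """Group failure symptoms so common failure points stand out."""
    symptoms: Counter = Counter()
    for line in log_path.read_text().splitlines():
        match = FAIL_LINE.match(line)
        if match:
            # Count recurring messages to help isolate failure points.
            symptoms[match["message"]] += 1
    return symptoms

if __name__ == "__main__":
    for message, count in summarize_failures(Path("test_run.log")).most_common():
        print(f"{count:4d}  {message}")
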

Identify failures and propose candidate solutions

The approach to testing will determine the identified failures and proposed candidate solutions.

As Brian Marick has noted, tests can be grouped by whom they support.

Tests that are programmer-supporting are used to help prepare the code and to build confidence in it.  When identifying failures and proposing solutions for programmer-supporting tests:

  • Failures will be identified at an object or element level
  • Solutions will be to help clarify the problem

Tests that are business-supporting are used to uncover prior mistakes and omissions.  When identifying failures and proposing solutions for business-supporting tests:

  • Failures will identify omissions in requirements
  • Solutions will help to clarify expectations of the system

Once this information has been identified, steps to resolve the failures can be proposed; the sketch below illustrates one way to route them.
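
The following sketch illustrates one possible routing of identified failures according to Marick's distinction above.  The Failure record and the proposed actions are illustrative only, not part of the method.

# Sketch of routing identified failures by testing approach.
from dataclasses import dataclass

@dataclass
class Failure:
    test_id: str
    category: str      # "programmer-supporting" or "business-supporting"
    symptom: str

def propose_resolution(failure: Failure) -> str:
    if failure.category == "programmer-supporting":
        # Failures surface at the object/element level; clarify the problem.
        return f"{failure.test_id}: isolate the failing element ({failure.symptom})"
    # Business-supporting failures point at omissions in requirements.
    return f"{failure.test_id}: review expectations with stakeholders ({failure.symptom})"

print(propose_resolution(Failure("login-042", "business-supporting", "unhandled locale")))
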

Communicate test evaluation results

Communicating test results affects both the actual and the perceived effectiveness of the tests.  When communicating test results, it is important to:

  • Know the audience so that the appropriate information is communicated in an appropriate form
  • Execute tests or scenarios that are likely to uncover the high-impact and high-risk issues, or that represent actual usage of the system

When preparing results reports, be prepared to answer the following questions (a tallying sketch follows the list):

  • How many test cases exist, and what are their states (pass, fail, blocked, etc.)?
  • How many bug reports have been filed and what are their states (open, assigned, ready for testing, closed, deferred, etc.)?
  • What trends and patterns do you see in test case and bug report states, especially opened and closed bug reports and passed and failed test cases?
  • For those test cases blocked or skipped, why are they in this state?
  • Considering all test cases not yet run--and perhaps not yet even created--what key risks and areas of functionality remain untested?
  • For failed test cases, what are the associated bug reports?
  • For bug reports ready for confirmation testing, when can your team perform the test?
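
A small tally over test-case and bug-report states, as sketched below, answers several of these questions directly.  The state names and sample records are placeholders for whatever your test management and bug tracking tools actually hold.

# Sketch of answering the status questions above from raw state data.
from collections import Counter

test_cases = [
    {"id": "TC-1", "state": "pass"},
    {"id": "TC-2", "state": "fail"},
    {"id": "TC-3", "state": "blocked"},
]
bug_reports = [
    {"id": "BUG-7", "state": "open"},
    {"id": "BUG-8", "state": "closed"},
]

def state_summary(records: list[dict]) -> Counter:
    """How many items exist, and what are their states?"""
    return Counter(r["state"] for r in records)

print("Test cases:", dict(state_summary(test_cases)))
print("Bug reports:", dict(state_summary(bug_reports)))
# Blocked or skipped cases deserve an explicit reason in the report.
for tc in test_cases:
    if tc["state"] == "blocked":
        print(f"{tc['id']} is blocked -- investigate and record why")
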
Evaluate and verify your results

Verify that the appropriate information has been captured and that the resulting artifacts are of sufficient value.

Now that you have completed the work, it is beneficial to verify that it was of sufficient value, and that you did not simply consume vast quantities of paper.  You should evaluate whether your work is of appropriate quality, and whether it is complete enough to be useful to those team members who will make subsequent use of it as input to their work.  Where possible, use the checklists provided in OpenUP Basic to verify that quality and completeness are "good enough".

Have the people performing the downstream activities that rely on your work as input taken part in reviewing your interim work?  Do this while you still have time available to address their concerns.  You should also evaluate your work against the key input artifacts to make sure you have represented them accurately and sufficiently.  It might be useful to have the author of the input artifact review your work on this basis.

Try to remember that OpenUP Basic is an iterative process and that in many cases artifacts evolve over time.  As such, it is not usually necessary--and is often counterproductive--to fully form an artifact that will only be partially used, or will not be used at all, in immediately subsequent work.  This is because there is a high probability that the situation surrounding the artifact will change--and the assumptions made when the artifact was created will be proven incorrect--before the artifact is used, resulting in wasted effort and costly rework.  Also, avoid the trap of spending too many cycles on presentation to the detriment of content value.  In project environments where presentation has importance and economic value as a project deliverable, you might want to consider using an administrative resource to perform presentation tasks.

Properties
  • Multiple Occurrences
  • Event-Driven
  • Ongoing
  • Optional
  • Planned
  • Repeatable