A common misperception of testing is that it only consists of running tests, i.e., executing the software and checking the results. Software testing is a process which includes many different activities; test execution (including checking of results) is only one of these activities. The test process also includes activities such as test planning, analyzing, designing, and implementing tests, reporting test progress and results, and evaluating the quality of a test object.
Typical Objectives of Testing
For any given project, the objectives of testing may include:
To prevent defects by evaluating work products such as requirements, user stories, design, and code
To verify whether all specified requirements have been fulfilled
To check whether the test object is complete and validate if it works as the users and other stakeholders expect
To build confidence in the level of quality of the test object
To find defects and failures, thus reducing the level of risk of inadequate software quality
To provide sufficient information to stakeholders to allow them to make informed decisions, especially regarding the level of quality of the test object
To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the test object’s compliance with such requirements or standards
The objectives of testing can vary, depending upon the context of the component or system being tested, the test level, and the software development lifecycle model. These differences may include, for example:
In component testing, the objectives may include finding failures to identify and fix defects early, as well as increasing code coverage.
In acceptance testing, the objectives may include confirming that the system works as expected and satisfies requirements, and providing stakeholders with information about the risk of releasing the system.
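As a hedged illustration of the component-testing objectives above (the function and test names are hypothetical, not from the syllabus), a component test confirms behavior early and exercises both branches of the code, which increases code coverage:

```python
# Hypothetical component under test: a simple discount calculator.
def discount(price: float, is_member: bool) -> float:
    """Return the price after applying a 10% member discount."""
    if is_member:
        return round(price * 0.9, 2)
    return price

# Component tests: confirm expected behavior and exercise both branches,
# supporting the objectives of finding defects early and increasing coverage.
def test_member_gets_discount():
    assert discount(100.0, True) == 90.0

def test_non_member_pays_full_price():
    assert discount(100.0, False) == 100.0

if __name__ == "__main__":
    test_member_gets_discount()
    test_non_member_pays_full_price()
    print("all component tests passed")
```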
Why is Testing Necessary?
Involving testers in requirements reviews or user story refinement helps detect defects in these work products, reducing the risk of developing incorrect or untestable features. Working closely with system designers during the design phase increases understanding and reduces the risk of fundamental design defects. Similarly, working closely with developers during code development reduces the risk of defects in the code and tests. Verifying and validating the software prior to release helps detect failures and supports the process of removing defects, increasing the likelihood that the software meets stakeholder needs and satisfies requirements.
The Relationship between Quality Assurance and Testing
Quality assurance is typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved.
Quality control involves various activities, including test activities, that support the achievement of appropriate levels of quality.
Test activities are part of the overall software development or maintenance process. Since quality assurance is concerned with the proper execution of the entire process, quality assurance supports proper testing.
Distinguish between error, defect, and failure
A person can make an error (mistake), which can lead to the introduction of a defect (fault or bug) in the software code.
An error that leads to the introduction of a defect in one work product can trigger an error that leads to the introduction of a defect in a related work product.
If a defect in the code is executed, this may cause a failure.
Error → Defect → Failure
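The error → defect → failure chain can be sketched in code (a hypothetical example, not from the syllabus): a programmer's mistake introduces a defect into the code, and the defect produces a failure only when the faulty statement is executed.

```python
# A programmer's ERROR (an off-by-one mistake) introduces a DEFECT into
# the code. The defect causes a FAILURE only when it is executed.

def sum_first_n(values, n):
    """Intended to sum the first n elements of values."""
    total = 0
    for i in range(n + 1):   # DEFECT: should be range(n); the programmer's
        total += values[i]   # off-by-one ERROR introduced it.
    return total

values = [10, 20, 30, 40]

# Executing the defective code with n = 4 reads past the list: a FAILURE
# that shows up as an exception.
try:
    sum_first_n(values, 4)
except IndexError:
    print("failure observed: IndexError")

# With n = 2 the same defect produces a different failure: a wrong result
# (30 expected, 60 returned) rather than a crash.
print(sum_first_n(values, 2))
```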
Test Activities and Tasks
A test process consists of the following main groups of activities:
Test planning
Test monitoring and control
Test analysis
Test design
Test implementation
Test execution
Test completion
Testing throughout the Software Development Lifecycle
Software Development and Software Testing
For every development activity, there is a corresponding test activity
Each test level has test objectives specific to that level
Test analysis and design for a given test level begin during the corresponding development activity
Testers participate in discussions to define and refine requirements and design, and are involved in reviewing work products (e.g., requirements, design, user stories, etc.) as soon as drafts are available. No matter which software development lifecycle model is chosen, test activities should start in the early stages of the lifecycle, adhering to the testing principle of early testing.
The selection and adaptation of software development lifecycle models should be based on the project and product characteristics. Factors such as project goals, product type, business priorities, and identified risks should be considered. For instance, the development and testing of a minor internal administrative system would differ from a safety-critical system like an automobile's brake control system. Organizational and cultural issues can also impact communication between team members and hinder iterative development. In some cases, combining or reorganizing test levels and activities may be necessary. For example, when integrating a commercial off-the-shelf software product into a larger system, interoperability testing may be performed at the system integration test level by the purchaser.
Planning: Gathering business requirements from the client or stakeholders.
Requirement gathering and analysis: Converting the information from the planning phase into clear requirements for the development team.
Designing: Creating the architecture and design of the software product.
Building and developing: Writing the code and implementing the functionality of the software product.
Testing: Verifying the quality and performance of the software product.
Implementation: Delivering the software product to the client or stakeholders.
Deployment: Installing the software product on the target environment.
Maintenance: Providing support and updates for the software product.
Maintenance Testing
Once deployed to production environments, software and systems need to be maintained. Changes of various sorts are almost inevitable in delivered software and systems, either to fix defects discovered in operational use, to add new functionality, or to delete or alter already-delivered functionality. Maintenance is also needed to preserve or improve non-functional quality characteristics of the component or system over its lifetime, especially performance efficiency, compatibility, reliability, security, and portability.
Static Testing
Static testing techniques provide a variety of benefits. When applied early in the software development lifecycle, static testing enables the early detection of defects before dynamic testing is performed (e.g., in requirements or design specifications reviews, backlog refinement, etc.). Defects found early are often much cheaper to remove than defects found later in the lifecycle, especially compared to defects found after the software is deployed and in active use. Using static testing techniques to find defects and then fixing those defects promptly is almost always much cheaper for the organization than using dynamic testing to find defects in the test object and then fixing them, especially when considering the additional costs associated with updating other work products and performing confirmation and regression testing.
Additional benefits of static testing may include:
Detecting and correcting defects more efficiently, and prior to dynamic test execution
Identifying defects which are not easily found by dynamic testing
Preventing defects in design or coding by uncovering inconsistencies, ambiguities, contradictions, omissions, inaccuracies, and redundancies in requirements
Increasing development productivity (e.g., due to improved design, more maintainable code)
Reducing development cost and time
Reducing testing cost and time
Reducing total cost of quality over the software’s lifetime, due to fewer failures later in the lifecycle or after delivery into operation
Improving communication between team members in the course of participating in reviews
Test techniques
Categories of Test Techniques
Factors to consider when choosing which test techniques to use include:
Component or system complexity
Regulatory standards
Customer or contractual requirements
Risk levels and types
Available documentation
Tester knowledge and skills
Available tools
Time and budget
Software development lifecycle model
The types of defects expected in the component or system
Black-box Test Techniques
Black-box testing is based on the specified behavior of the test object, without reference to its internal structure. Black-box test techniques can be applied at all test levels.
White-box Test Techniques
White-box testing is based on the internal structure of the test object. White-box test techniques can be used at all test levels, but the two code-related techniques discussed in this section are most commonly used at the component test level.
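A minimal sketch of structure-based testing (the function is hypothetical, used only to illustrate coverage): tests are derived from the code's internal structure, and the two code-related coverage measures most often used at component level are statement coverage and branch coverage.

```python
# White-box sketch: the function below contains one decision with two
# outcomes. Statement coverage requires executing every statement;
# branch coverage requires exercising both decision outcomes.

def classify(temperature: int) -> str:
    if temperature > 30:        # decision with two outcomes
        return "hot"            # reached only by the True outcome
    return "mild"               # reached only by the False outcome

# One test alone (e.g., classify(35)) misses the False branch; together,
# these two tests achieve 100% branch coverage for this function.
assert classify(35) == "hot"    # covers the True outcome
assert classify(20) == "mild"   # covers the False outcome
print("both branches exercised")
```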
Experience-based Test Techniques
Experience-based test techniques involve deriving test cases from the tester's skill, intuition, and experience with similar applications and technologies. These techniques can help identify tests that may not be easily identified by other systematic techniques. However, the coverage and effectiveness of these techniques can vary depending on the tester's approach and experience. Assessing coverage can be challenging and may not be measurable with these techniques.
Error Guessing
How the application has worked in the past
What kind of errors tend to be made
Failures that have occurred in other applications
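Error guessing can be sketched as follows (the `average` helper and its defect are hypothetical): drawing on experience of mistakes that tend to be made, the tester probes inputs likely to expose defects, such as empty or boundary inputs.

```python
# Error-guessing sketch: experience suggests developers often forget to
# guard against empty input, so the tester tries it deliberately.

def average(values):
    return sum(values) / len(values)   # defect: no guard for empty input

# Guessed error-prone input: an empty list.
try:
    average([])                        # len([]) == 0 -> ZeroDivisionError
except ZeroDivisionError:
    print("guessed failure found: empty input is not handled")

# Sanity check with a normal input.
assert average([2, 4]) == 3.0
```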
Exploratory Testing - informal (not pre-defined) tests are designed, executed, logged, and evaluated dynamically during test execution. The test results are used to learn more about the component or system, and to create tests for the areas that may need more testing.
Checklist-based Testing - Checklists can be created to support various test types, including functional and non-functional testing. In the absence of detailed test cases, checklist-based testing can provide guidelines and a degree of consistency.
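The checklist idea can be sketched in code (the checklist items and the sample form data are hypothetical): in the absence of detailed test cases, each checklist item becomes a reusable check that is applied consistently to the test object.

```python
# Checklist-based testing sketch: generic checks applied to a sample
# registration form. Each checklist item pairs a description with a check.

form = {"username": "alice", "email": "alice@example.com", "age": 30}

checklist = [
    ("required fields present", lambda f: {"username", "email"} <= f.keys()),
    ("email contains @",        lambda f: "@" in f.get("email", "")),
    ("age is non-negative",     lambda f: f.get("age", 0) >= 0),
]

# Running the checklist gives a degree of consistency even without
# detailed, pre-scripted test cases.
for name, check in checklist:
    result = "PASS" if check(form) else "FAIL"
    print(f"{name}: {result}")
```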
Types of Testing
Test Management
Purpose and Content of a Test Plan
Determining the scope, objectives, and risks of testing
Defining the overall approach of testing
Integrating and coordinating the test activities into the software lifecycle activities
Making decisions about what to test, the people and other resources required to perform the various test activities, and how test activities will be carried out
Scheduling of test analysis, design, implementation, execution, and evaluation activities, either on particular dates (e.g., in sequential development) or in the context of each iteration (e.g., in iterative development)
Selecting metrics for test monitoring and control
Budgeting for the test activities
Determining the level of detail and structure for test documentation (e.g., by providing templates or example documents)
Risks and Testing
Risk involves the possibility of an event in the future which has negative consequences. The level of risk is determined by the likelihood of the event and the impact (the harm) from that event.
Product and Project Risks
Product risk refers to the possibility that a work product, such as a specification, component, system, or test, may fail to meet the legitimate needs of its users and stakeholders. When these risks are related to specific quality characteristics of a product, they are known as quality risks. Examples of product risks include:
Software might not perform its intended functions according to the specification
Software might not perform its intended functions according to user, customer, and/or stakeholder needs
A system architecture may not adequately support some non-functional requirement(s)
A particular computation may be performed incorrectly in some circumstances
A loop control structure may be coded incorrectly
Response times may be inadequate for a high-performance transaction processing system
User experience (UX) feedback might not meet product expectations
Defect Management
A defect report filed during dynamic testing typically includes: