V. Software Quality Assurance
5.1 Documentation
5.1.1 Test Cases
Refer to the following table for the test scenarios per Milestone:
List of Test Cases (Milestone-based)
5.1.2 Results of the Tests
For the results of the tests performed, see the Test Log in Section VI.
5.2 Software Reviews
5.2.1 Defined Metrics
Number of Tests: Tracks the total number of tests created and executed.
Test Coverage: Percentage of code covered by unit and integration tests.
Defect Density: Number of defects, as reflected by the number of failing test cases.
Number of Code Reviews: Tracks the total number of peer reviews conducted.
Length of Code Reviews: Average time spent on peer reviews per code module.
Defect Resolution Time: Average time taken to resolve identified defects.
Customer-reported Defects: Number of defects reported by end-users after release; these are triaged by Hema.
Test Execution Time: The total time taken to execute all test cases.
Requirement Traceability: Percentage of requirements with associated test cases and successful test execution.
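As an illustration of how the traceability and defect metrics above could be computed from a test-run export, consider the following Kotlin sketch; the TestRecord shape is an assumption for illustration, not an existing schema in the project.

    // Hypothetical record of one executed test case; the fields are
    // assumptions, not an actual project schema.
    data class TestRecord(val requirementId: String?, val passed: Boolean)

    // Requirement Traceability: percentage of requirements that have at
    // least one associated test case that executed successfully.
    fun requirementTraceability(requirements: Set<String>, runs: List<TestRecord>): Double {
        require(requirements.isNotEmpty()) { "No requirements to trace" }
        val covered = runs
            .filter { it.passed }
            .mapNotNull { it.requirementId }
            .filter { it in requirements }
            .toSet()
        return 100.0 * covered.size / requirements.size
    }

    // Defect Density, as defined above: the number of failing test cases.
    fun defectDensity(runs: List<TestRecord>): Int = runs.count { !it.passed }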
5.2.2 Tools, Techniques, and Methodologies
Code Reviews:
Code Quality: Ensure adherence to coding standards, proper error handling, and maintainability.
Code Structure: Check for efficient algorithms, modular design, and reuse opportunities.
Documentation Review: Ensure comments and code documentation are sufficient and up-to-date.
Acceptance Tests:
Functional Completeness: Validate that the software meets all documented functional requirements.
User Interface (UI) Consistency: Ensure UI components match the design and provide consistent interaction patterns.
Usability Testing: Conduct tests to assess user satisfaction and ease of use.
Edge Case Handling: Verify the system’s behavior under rare or extreme conditions.
Design Reviews:
Architectural Integrity: Check that the system design aligns with architectural goals, including scalability and maintainability.
Module Interactions: Review module dependencies and interactions for efficiency and correctness.
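As a concrete, hypothetical example of an acceptance check for functional completeness, the following Espresso test asserts that the offline map is visible on launch; MainActivity and R.id.mapView are placeholder names, not references to the actual codebase.

    import androidx.test.espresso.Espresso.onView
    import androidx.test.espresso.assertion.ViewAssertions.matches
    import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
    import androidx.test.espresso.matcher.ViewMatchers.withId
    import androidx.test.ext.junit.rules.ActivityScenarioRule
    import androidx.test.ext.junit.runners.AndroidJUnit4
    import org.junit.Rule
    import org.junit.Test
    import org.junit.runner.RunWith

    @RunWith(AndroidJUnit4::class)
    class MapScreenAcceptanceTest {
        @get:Rule
        val activityRule = ActivityScenarioRule(MainActivity::class.java)

        @Test
        fun mapIsDisplayedOnLaunch() {
            // Functional completeness: the map must render without any
            // network connection, straight from the bundled .mmpk data.
            onView(withId(R.id.mapView)).check(matches(isDisplayed()))
        }
    }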
5.3 Performance Testing
5.3.1 Defined Metrics
Load Time: The amount of time it takes to load the .mmpk file.
Render Time: Time it takes to render map feature layers.
Frame Rate: Frames per second (FPS) while interacting with the application map (e.g., panning, zooming).
Memory Usage: Amount of memory consumed when the .mmpk file is loaded.
CPU Usage: Percentage of CPU utilization during heavy operations such as waypoint navigation, toggling track assets, and querying Signals for LoA selection.
Battery Consumption: How the application impacts battery life.
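A minimal sketch of how the load-time metric could be captured, assuming the app loads the .mmpk through ArcGIS Runtime's MobileMapPackage (the path and callback are placeholders):

    import android.os.SystemClock
    import com.esri.arcgisruntime.loadable.LoadStatus
    import com.esri.arcgisruntime.mapping.MobileMapPackage

    // Measures wall-clock time from the load request until the package
    // reports LOADED; onMeasured receives the elapsed milliseconds.
    fun measureMmpkLoadTime(path: String, onMeasured: (Long) -> Unit) {
        val mmpk = MobileMapPackage(path)
        val start = SystemClock.elapsedRealtime()
        mmpk.addDoneLoadingListener {
            if (mmpk.loadStatus == LoadStatus.LOADED) {
                onMeasured(SystemClock.elapsedRealtime() - start)
            }
        }
        mmpk.loadAsync()
    }

The same elapsed-time pattern applies to the render-time metric, timed from layer addition until the map view reports drawing complete.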
5.3.2 Tools, Techniques, and Methodologies
To track the performance of the application for each build, the following benchmarks are used:
Testing of application functionalities such as map navigation, journey selection, and map layer management.
Layer feature query performance testing with LoA selection, evaluating query execution time as well as memory and CPU consumption during these operations.
Analysis of memory usage and leaks during the initial map layer render and track asset toggling.
Responsiveness of the application across different states, such as switching between foreground and background tasks, and during the main application functionalities.
Battery consumption analysis during the above benchmarks.
Ensuring UI responsiveness and overall user-experience fluidity while using the application offline.
Application testing under extended usage scenarios as well as possible edge-case scenarios that can be encountered in the field.
To track the defined performance metrics across all of the above benchmarks, the Android Studio Profiler is used. This built-in tool traces the application and identifies areas in which the app makes inefficient use of resources such as the CPU, memory, graphics, network, or the device battery.
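Where the Profiler's built-in timeline is too coarse, custom trace sections can label a benchmark so it is identifiable on the CPU timeline. Below is a sketch using the androidx.tracing KTX helper (assumed as a dependency; querySignalsForLoa is a hypothetical stand-in for the app's real query call):

    import androidx.tracing.trace

    // Placeholder for the app's actual Signals feature-layer query.
    fun querySignalsForLoa() { /* ... */ }

    // Wraps the LoA query benchmark in a named trace section so the
    // work shows up as "LoA-signal-query" in the Profiler's CPU trace.
    fun runLoaQueryBenchmark() {
        trace("LoA-signal-query") {
            querySignalsForLoa()
        }
    }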
VI. Software Test Documentation
6.1 Overview
In this project, the team submits a list of test cases (e.g., the Milestone-based list in Section 5.1.1) to the client as a baseline for internal milestone approval. Each significant test run is logged in the Test Log below for tracking. An Outstanding Issue List table is maintained, where issues are added after every test run and their Issue IDs are recorded in the Issues Found column of the Test Log. After every test run, a Test Report entry is created summarizing the issues found and any recommendations from the testers.
6.2 Test Plan
6.2.1 Test Cases
Refer to the following for the Test Scenarios per Milestone:
6.2.2 Test Log
6.2.3 Outstanding Issue List
Evaluations and Recommendations