
ETH test strategy

Overview

Tatum is a blockchain development platform that supports over 40 blockchain protocols, and Ethereum is among the best-known chains it supports. Using the ETH APIs, developers can build blockchain applications quickly and easily, with fast adoption of blockchain technology. This document covers testing of the ETH APIs during development to make sure every API is called and functions as expected. It also covers confirming that virtual accounts stay up to date with the real blockchain ledger, that incoming blockchain transactions trigger settlement between the off-chain and on-chain records, and that ledger entries are created correctly on the virtual accounts. Finally, this document covers automation, security, and performance testing.

Assumptions

This project follows the agile Scrum methodology, with a sprint length of two weeks.
Jira will be used as the project management tool.
Swagger is used as the API reference documentation.
Test scripts will be created based on user stories or detailed requirements.
The QA team consists of two members and the QA lead.
A sandbox/test environment will be configured to try out the APIs.
Quality gates throughout the project will focus mainly on:
Automatically running the CI/CD pipeline on each development merge request to make sure new code does not affect or break existing code, and that newly built features meet the acceptance criteria.
Manually confirming that each user story/API function meets its acceptance criteria before it moves to the quality team.

Test Strategy

Early on, before development starts, we test the API contract using the Swagger interface: making sure that endpoints are correctly named, that resources and their types correctly reflect the object model, that there is no missing or duplicated functionality, and that relationships between resources are correctly reflected in the API.
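As an illustration, a minimal contract check could fetch the published OpenAPI (Swagger) document and assert that the expected ETH endpoints are declared with the expected HTTP methods. This is only a sketch: the spec URL and the endpoint paths below are assumptions for illustration, not the confirmed Tatum contract.

```typescript
// Minimal contract check: load the OpenAPI (Swagger) document and verify that
// the expected ETH endpoints are declared with the expected methods.
// SPEC_URL and the endpoint paths are placeholders, not confirmed Tatum paths.
const SPEC_URL = process.env.SPEC_URL ?? "https://api.example.com/openapi.json";

async function checkContract(): Promise<void> {
  const res = await fetch(SPEC_URL);
  if (!res.ok) throw new Error(`Could not load spec: HTTP ${res.status}`);
  const spec: any = await res.json();

  // Endpoints the contract is expected to expose (illustrative names only).
  const expected: Array<[path: string, method: string]> = [
    ["/v3/ethereum/balance/{address}", "get"],
    ["/v3/ethereum/transaction", "post"],
  ];

  for (const [path, method] of expected) {
    if (!spec.paths?.[path]?.[method]) {
      throw new Error(`Missing ${method.toUpperCase()} ${path} in the spec`);
    }
  }
  console.log("Contract check passed");
}

checkContract().catch((err) => { console.error(err); process.exit(1); });
```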
Once development is completed, we start the rest of the testing activities described below:

Continuous Testing using CI/CD

To make sure the application is stable, we rely on CI testing to get immediate feedback on code quality as soon as any feature is developed. We therefore run our automated tests on each pull request to detect defects as early as possible, minimize the cost of diagnosing issues, and let the development side take corrective action. Once the CI pipeline passes, the PR is accepted and merged into the main branch, which triggers a build deployed to the staging environment, a replica of the production environment. Once the build completes successfully, the CD pipeline is triggered and runs all automated test cases against the staging environment. These test cases ensure the integrity of the entire build by exercising the old core functionality as well as the newly added APIs and features.
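As a sketch of how the same automated suite can target the pull-request environment in CI and the staging environment in CD, the Cypress configuration below reads the base URL and the subscription key from environment variables injected by the pipeline. The variable names and URLs are assumptions, not an existing pipeline definition.

```typescript
// cypress.config.ts — sketch of pointing one test suite at different
// environments from CI (pull-request checks) and CD (staging runs).
// The base URL fallback and the API_KEY variable name are assumptions.
import { defineConfig } from "cypress";

export default defineConfig({
  e2e: {
    // The pipeline sets CYPRESS_BASE_URL to the deployment under test.
    baseUrl: process.env.CYPRESS_BASE_URL ?? "https://staging.example.com",
    env: {
      // Subscription key injected by the pipeline, never committed to the repo.
      apiKey: process.env.API_KEY,
    },
  },
});
```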

Regression Testing

Regression testing of our APIs is where we automate most of the testing effort. Since we know the expected result of each API response and how it should behave, automating these test cases is time-saving and efficient. We rerun all previously executed test cases on every new build and before each release.

Test Phases

Functional Testing
For each API [GET/ POST] we take the individual test actions below (a minimal request/response check is sketched after this list):
Verify the response payload: check for a valid JSON body and correct field names, types, and values, including error responses.
Verify correct HTTP status codes:
200 OK
400 Bad Request: validation failed for the given object in the HTTP body or request parameters.
401 Unauthorized: no valid or active subscription key present in the HTTP header.
500 Internal Server Error: there was an error on the server while processing the request.
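A minimal functional check along these lines might look like the Cypress sketch below, which verifies the success payload and one error status. The endpoint path, header name, and response fields are illustrative assumptions rather than the confirmed Tatum contract.

```typescript
// Functional sketch for a GET endpoint: status code, JSON body shape, and an
// error case. Endpoint path and field names are assumptions for illustration.
describe("ETH balance API", () => {
  const address = "0x0000000000000000000000000000000000000000";

  it("returns 200 and a well-formed balance payload", () => {
    cy.request({
      url: `/v3/ethereum/account/balance/${address}`,
      headers: { "x-api-key": Cypress.env("apiKey") },
    }).then((res) => {
      expect(res.status).to.eq(200);
      expect(res.body).to.have.property("balance").and.to.be.a("string");
    });
  });

  it("returns 401 when the subscription key is missing", () => {
    cy.request({
      url: `/v3/ethereum/account/balance/${address}`,
      failOnStatusCode: false, // let the test assert on the error status itself
    }).then((res) => {
      expect(res.status).to.eq(401);
    });
  });
});
```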
Requirements review
Based on the approved SRS and the stakeholder traceability matrix, we test all new and changed functional and non-functional requirements in the context of the existing system requirements. We also make sure that the acceptance criteria for every user story are written against the requirements.
Integration Testing
Exercise the interactions between the APIs and the blockchain nodes: call the GET and POST APIs and verify how the blockchain nodes respond.
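A hedged sketch of such an integration flow: POST a request that reaches the node, then GET the resulting resource to confirm the node processed it. The endpoint paths, payload fields, and environment variable names below are assumptions for illustration.

```typescript
// Integration sketch: broadcast a pre-signed test transaction, then read it
// back through the node-facing GET endpoint. Paths and fields are assumptions.
it("broadcasts a signed transaction and can read it back", () => {
  cy.request({
    method: "POST",
    url: "/v3/ethereum/broadcast",
    headers: { "x-api-key": Cypress.env("apiKey") },
    body: { txData: Cypress.env("signedTxHex") }, // pre-signed test transaction
  }).then((post) => {
    expect(post.status).to.eq(200);
    const txId = post.body.txId;
    expect(txId).to.be.a("string");

    // Confirm the node returns the broadcast transaction.
    cy.request({
      url: `/v3/ethereum/transaction/${txId}`,
      headers: { "x-api-key": Cypress.env("apiKey") },
    }).then((get) => {
      expect(get.status).to.eq(200);
      expect(get.body).to.have.property("hash", txId);
    });
  });
});
```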
Non-Functional Testing
Security Testing
Using Postman, we make sure that all APIs pass the security checks below (a minimal authorization check is sketched after this list):
Broken Object Level Authorization
Broken User Authentication
Broken Function Level Authorization
Injections
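For example, a Broken Object Level Authorization check can call an endpoint with an object ID that belongs to another tenant and expect an authorization error. The sketch below uses a Cypress-style test rather than Postman, for consistency with the rest of the automation suite; the endpoint path and environment variable names are assumptions.

```typescript
// Security sketch for Broken Object Level Authorization: a key belonging to
// tenant A must not be able to read a virtual account owned by tenant B.
// The endpoint path and environment variable names are illustrative only.
it("rejects access to another tenant's virtual account", () => {
  cy.request({
    url: `/v3/ledger/account/${Cypress.env("otherTenantAccountId")}`,
    headers: { "x-api-key": Cypress.env("tenantAApiKey") },
    failOnStatusCode: false,
  }).then((res) => {
    // Either 401 or 403 is acceptable; 200 with foreign data is a defect.
    expect([401, 403]).to.include(res.status);
  });
});
```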
Load & Performance testing
Identify the testing environment.
We usually execute performance tests on the staging environment so that it mirrors the production environment as closely as possible in terms of data volume, number of users, hardware resources, and specs.
Identify performance metrics.
We define metrics and the success criteria of our performance testing:
Response times for end users.
Processing times.
Throughput: the number of operations/transactions/transfers per unit of time.
Concurrency: how many active users the system should handle at any point in time (a minimal throughput/latency sketch follows this section).
Plan and design performance tests.
Identify performance test scenarios that take into account user variability, test data, and target metrics.
Configure the test environment.
Prepare the elements of the test environment and instruments needed to monitor resources.
Implement your test design.
Actual development of the planned and designed tests.
Execute tests.
Run the performance tests, and monitor/ capture the data generated.
Analyze, report, retest.
Analyze the data and share the findings. Run the performance tests again, with the same parameters and with different parameters, to verify the fixes that were produced.
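To make the throughput and response-time metrics concrete, the sketch below fires a fixed number of concurrent requests and reports requests per second and average latency. It is only an illustration of the metrics; the actual load runs are driven by JMeter, and the target URL, header, and environment variable names are assumptions.

```typescript
// Minimal load sketch: N concurrent GET requests, reporting throughput and
// average response time. TARGET_URL, CONCURRENCY and API_KEY are assumptions.
const TARGET_URL =
  process.env.TARGET_URL ?? "https://staging.example.com/v3/ethereum/block/current";
const CONCURRENCY = Number(process.env.CONCURRENCY ?? 50);

async function timedRequest(): Promise<number> {
  const start = Date.now();
  const res = await fetch(TARGET_URL, {
    headers: { "x-api-key": process.env.API_KEY ?? "" },
  });
  await res.text(); // drain the body so timing covers the full response
  return Date.now() - start;
}

async function run(): Promise<void> {
  const started = Date.now();
  const durations = await Promise.all(
    Array.from({ length: CONCURRENCY }, timedRequest),
  );
  const elapsedSec = (Date.now() - started) / 1000;
  const avgMs = durations.reduce((a, b) => a + b, 0) / durations.length;
  console.log(
    `throughput: ${(CONCURRENCY / elapsedSec).toFixed(1)} req/s, ` +
      `avg latency: ${avgMs.toFixed(0)} ms`,
  );
}

run().catch(console.error);
```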
Production deployment Testing
Checks that the application is operating in the production environment as expected and behaving as it did on the staging environment. Performed by developers and testers by going through a checklist.

Defect Management

Here we agree on the defect priority/severity mappings so that all parties are aligned on the application status. When a test is considered to have failed, a corresponding defect is logged in Jira and associated with the relevant user story. This is done by the tester who executed the test or otherwise encountered the failure.
Defect Priority [order of fixing]
Show stopper: needs to be fixed right now; it is a blocker and we can neither release nor proceed with testing.
Urgent: needs to be fixed before any high, medium, or low defect.
High: should be fixed as early as possible.
Medium: May be fixed after the release/ in the next release.
Low: Fixing can be deferred until all other priority defects are fixed.
Defect Severity [impact of issues]
Critical: The defect affects critical functionality or critical data. It does not have a workaround.
High: The defect affects major functionality or major data. A workaround exists, but it is not obvious and is difficult.
Medium: The defect affects minor functionality or non-critical data. It has an easy workaround.
Low: The defect does not affect functionality or data. It does not even need a workaround. It does not impact productivity or efficiency. It is merely an inconvenience.

Exit Criteria

Here we define when the User Story can be considered as closed, and ready to be added to the current application.
Successful execution of the test script >> one per user story.
No open critical, major, or average-severity defects (unless the issue is determined to be low impact and low risk); this is determined by the product owner before release.

Test Environment

Development: used for frequent (e.g., daily) deployment of builds that have passed the CI automated tests but have not yet been subject to manual regression tests. It is used by developers and testers.
Staging: ideally identical to the production environment; used for validation of the application and User Acceptance Testing. Here we run the automated regression, performance, and security tests before switching to production.
Production: where the latest software versions and updates are pushed to users.

Testing Tools

For performance and API testing >> JMeter.
For test automation >> Postman & Cypress.
Test case documentation will be done using AIO Tests for Jira.
