
Containerization Of Automated Test Suite

What is Docker and How the Automated Test Suite Will Look After Containerization:


What is Docker? Docker is a containerization platform that packages an application together with its dependencies into a container image, so it can run consistently across different environments (local machines, CI runners, etc.). This reduces “works on my machine” issues and helps create reproducible environments that are easy to share and re-create.
Current Architecture (Before Containerization):
Automated test suite runs on GitHub Actions.
Before pushing to CI, tests are executed locally for validation, but local setup requires manually installing and maintaining multiple dependencies.
Target Architecture (After Containerization):
A single Docker image will bundle the automation runtime: Robot Framework, the Browser library/Playwright, and required tools and configuration such as PowerCLI.
The same image will be used for:
Local execution
GitHub Actions execution
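As a rough sketch, the image described above could be defined in a Dockerfile along these lines. The base image tag, versions, and install steps are assumptions to be adapted; the PowerShell/PowerCLI layer is omitted here because its install steps are distribution-specific:

```dockerfile
# Sketch only — base image tag and versions are assumptions, adjust as needed.
FROM python:3.12-slim-bookworm

# Node.js is required by the Browser library's `rfbrowser init` step
RUN apt-get update \
    && apt-get install -y --no-install-recommends nodejs npm \
    && rm -rf /var/lib/apt/lists/*

# Robot Framework + Browser library (Playwright) and a browser binary
RUN pip install --no-cache-dir robotframework robotframework-browser \
    && rfbrowser init chromium

# PowerShell + PowerCLI would be layered in similarly (steps omitted)

WORKDIR /suite
COPY . .
CMD ["robot", "--outputdir", "results", "tests/"]
```

Everything the suite needs then lives in this one file, which becomes the single source of truth for the runtime.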
Diagram: Before vs After

Current Scenario vs After Docker Implementation:


| Aspect | Current Scenario | After Docker Implementation |
| --- | --- | --- |
| Environment Setup | Manual setup required on local machines and CI runners | Single Docker image used across local and CI environments |
| Dependency Management | Multiple dependencies installed individually (Robot Framework, Browser library, PowerShell, Python, PowerCLI) | All dependencies bundled inside the Docker image |
| Consistency | Local vs CI differences can cause failures | Uniform runtime across stages reduces environment mismatch |
| Local Testing | Requires replicating the CI environment manually | Local testing is identical to CI using the same image |
| CI/CD Integration | GitHub Actions runs with manual setup steps | GitHub Actions runs jobs inside a container built from the same Docker image |
| Maintenance | Updating dependencies requires changes across multiple environments | Update the Dockerfile → build a new image tag → use it everywhere |
| Debugging Ease | Debugging is complex: scripts may work locally but fail on runners, requiring investigation into missing libraries, environment mismatches, or networking issues | Environment is identical locally and on runners, so remaining failures are most likely network-related, which narrows debugging |
Key Benefits
A) Simplified dependency management (Single source of truth)
All automation prerequisites are captured in the Dockerfile/image, reducing repeated setup and ensuring consistent dependency versions across local and CI.

B) Faster, simpler local environment setup

New team members (or anyone setting up a new machine) can run the test suite with Docker rather than manually installing multiple dependencies and matching versions.

C) Improved reliability by reducing environment-related failures

Because containers bundle dependencies into a stable runtime, failures caused by local/runner mismatch reduce significantly (more reproducible runs).

D) Better debugging for CI-only failures (especially networking)

When local and CI share the same container image, you can more quickly confirm whether issues are truly network/infra-related instead of dependency drift.

E) Rollback to a known-good environment

Container images can be treated as versioned snapshots of the entire test runtime. If a dependency update breaks the suite, rollback becomes “switch image tag and rerun,” rather than re-installing old dependencies manually.
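In practice, the rollback described above is just a tag switch. The image name and tags below are hypothetical examples:

```shell
# Suite broken after a dependency update in the latest image?
docker run --rm my-registry/test-suite:2025-06-01 robot tests/

# Roll back: rerun on the last known-good snapshot of the runtime
docker run --rm my-registry/test-suite:2025-05-15 robot tests/
```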

F) Portability across CI platforms

Because the runtime is packaged, the same Docker image can run in other CI platforms (e.g., Jenkins/Azure DevOps) with minimal changes. The goal is consistent execution across environments and stages.

G) Cleaner execution model (less runner-specific scripting)

Execution moves from OS-specific scripts (e.g., CMD files and machine setup scripts) to a consistent “run the container” model, improving maintainability and reducing platform-specific glue code.

H) Cost and resource savings

Multiple independent containers can run on a single EC2 host instead of provisioning multiple runners, making fuller use of the available runner hardware.

Efforts Required to Make Changes in the Current Codebase:


To enable Docker-based execution of the automated test suite, the following changes are required:
Update GitHub Actions Workflow
Modify workflow YAML to pull and use the Docker image for test execution.
Ensure jobs reference the Docker container instead of installing dependencies manually.
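A minimal sketch of the workflow change, assuming the image is published to a registry (image name and paths below are placeholders):

```yaml
# Sketch — runner labels, image name, and test paths are assumptions
jobs:
  tests:
    runs-on: [self-hosted, linux]
    container:
      image: ghcr.io/your-org/test-suite:latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Robot Framework tests
        run: robot --outputdir results tests/
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: robot-results
          path: results/
```

With the `container:` key, the job's steps execute inside the image, so no dependency-install steps are needed in the workflow.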
Local Testing Using Docker
Provide instructions to run tests locally using the same Docker image.
Validate that local execution mirrors CI/CD behavior.
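Local execution could then look like the following, mounting the checked-out suite into the container (image name is a placeholder):

```shell
# Run the same image locally that CI uses; results land in ./results
docker run --rm \
  -v "$PWD":/suite -w /suite \
  ghcr.io/your-org/test-suite:latest \
  robot --outputdir results tests/
```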
Version Control and Image Updates
Maintain Dockerfile in the repository.
Update the image whenever dependencies change to keep environments consistent.
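The update cycle is then a standard build-and-publish step whenever the Dockerfile changes (registry and version tag below are illustrative):

```shell
# After editing the Dockerfile: build, tag with a new version, and publish
docker build -t ghcr.io/your-org/test-suite:1.4.0 .
docker push ghcr.io/your-org/test-suite:1.4.0
```

Pinning workflows and local runs to explicit tags (rather than `latest`) is what makes the rollback scenario above possible.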
Change EC2 Runner
Currently, we use a Windows runner for executing workflows.
For containerization, we need to switch to a Linux-based runner, as Docker is natively supported on Linux and offers better compatibility and performance for containerized workloads.

Disadvantages of Using Docker Compared to Current Scenario:


While Docker provides significant benefits, there are certain limitations and challenges compared to the current setup:
Headless Execution for Web Automation
Containers typically run headless, so browser automation does not present an interactive, visible session.
However, this is mitigated by the ability to capture screenshots during execution, ensuring visibility of test results.
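With the Browser library, headless mode and screenshot capture are explicit in the test itself; a minimal illustration (the URL and test case are hypothetical):

```robotframework
*** Settings ***
Library    Browser

*** Test Cases ***
Login Page Loads
    New Browser    chromium    headless=True
    New Page       https://example.test/login
    # Screenshot is saved into the Robot Framework output directory,
    # giving visual evidence even though no browser window is shown
    Take Screenshot    filename=login-page
```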
Resource Management on Shared Hosts
When running multiple containers on a single EC2 host, careful resource allocation is necessary to avoid contention.
Testing on the current EC2 runner showed that its resources are sufficient to run multiple containers in parallel.
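Docker's built-in resource constraints can enforce that allocation per container (image name and suite paths are placeholders):

```shell
# --cpus / --memory cap each container so parallel runs don't starve each other
docker run --rm --cpus=2 --memory=4g \
  ghcr.io/your-org/test-suite:latest robot tests/smoke

docker run --rm --cpus=2 --memory=4g \
  ghcr.io/your-org/test-suite:latest robot tests/regression
```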
