
[Tech Spec] Cowrywise App CI

Overview

The CI flow runs on GitHub Actions. For every PR, there are 3 steps: tests are run first; right after, we build the Docker image and push it to the GitHub Docker registry; we then deploy the app to AWS Elastic Beanstalk and notify the team on Slack.
After a PR is closed, the app deployment is deleted (S3 bucket, EC2 instance, Elastic Beanstalk environment, etc). There are also 2 separate workflows for deploying to the staging environment and the production environment. The staging workflow connects to the staging box via SSH, pulls the up-to-date code from develop and then restarts the app. It also connects to the task server box and does the same thing.
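The staging workflow is not covered in detail below, so here is a minimal sketch of what its SSH step might look like, assuming appleboy/ssh-action and hypothetical secret names, code path and restart command:

- name: Deploy to staging
  uses: appleboy/ssh-action@master
  with:
    host: ${{ secrets.STAGING_HOST }}        # hypothetical secret names
    username: ${{ secrets.STAGING_USER }}
    key: ${{ secrets.STAGING_SSH_KEY }}
    script: |
      cd /srv/cowrywise-app                  # hypothetical code path
      git pull origin develop
      sudo systemctl restart cowrywise-app   # hypothetical restart command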

Description

This section describes each stage of all the workflow files and what the jobs do.

Continuous Integration: ci.yml

This is the main workflow file that runs on every PR. It has 3 jobs:

1. Run Unit and Integration Tests

This pulls the repo into a docker container, builds the docker-compose stack, sets up the app and then runs tests via pytest. There is a step that tells the job to wait for 15 seconds before continuing, to make sure that the database is up and ready to accept connections before we migrate and start running tests. The pytest runner shows all successful and failed tests; any failures are visible from the GitHub Actions run log interface.

2. Build to docker registry

This job builds the cowrywise app Docker image and pushes it to GitHub's Docker registry. This makes the deployment stage faster: the deploy job no longer needs to run every step in the Dockerfile to build the image, because the image has already been built.

The resulting image is tagged with the git ref.
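A sketch of what the build-and-push job might look like, assuming the image name quoted from Dockerrun.aws.json below and the pr-<ref> tag used later in the find-and-replace step:

build-image:
  needs: run-tests           # assumed job dependency
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v1
    - name: Inject slug/short variables
      uses: rlespinasse/github-slug-action@v3.x
    - name: Log in to the GitHub Docker registry
      run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login docker.pkg.github.com -u ${{ github.actor }} --password-stdin
    - name: Build and push the image
      run: |
        docker build -t docker.pkg.github.com/cowrywise/cowrywise-app/adashi-build:pr-${{ env.GITHUB_REF_SLUG }} .
        docker push docker.pkg.github.com/cowrywise/cowrywise-app/adashi-build:pr-${{ env.GITHUB_REF_SLUG }}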


3. Deploy to test server

Now here's where it gets tricky, so follow closely. The plan is to deploy our built image to AWS Elastic Beanstalk.
What we do here is give the EB deployment (environment and application) the same (or a similar) name as the branch so it is easily identifiable. This name is useful when we eventually want to terminate the environment (basically delete the deployed app).

The first step checks out the repo at the current branch via actions/checkout@v1. This action checks out your repository under $GITHUB_WORKSPACE, so your workflow can access it.
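In the workflow this is just (the step name is illustrative):

- name: Checkout
  uses: actions/checkout@v1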

After that we import the action rlespinasse/github-slug-action@v3.x. What this does is provide a bunch of environment variables that we need when naming our EB application and environment (more on this later).
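The step itself is a one-liner (the step name is illustrative):

- name: Inject slug/short variables
  uses: rlespinasse/github-slug-action@v3.x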
Here's an example of the variables it exposes:
GITHUB_EVENT_PULL_REQUEST_HEAD_SHA_SHORT: 72074095
GITHUB_REPOSITORY_OWNER_PART: cowrywise
GITHUB_REPOSITORY_NAME_PART: cowrywise-app
GITHUB_REPOSITORY_SLUG: cowrywise-cowrywise-app
GITHUB_REPOSITORY_OWNER_PART_SLUG: cowrywise
GITHUB_REPOSITORY_NAME_PART_SLUG: cowrywise-app
GITHUB_REPOSITORY_SLUG_URL: cowrywise-cowrywise-app
GITHUB_REPOSITORY_OWNER_PART_SLUG_URL: cowrywise
GITHUB_REPOSITORY_NAME_PART_SLUG_URL: cowrywise-app
GITHUB_REF_SLUG: 78-merge
GITHUB_HEAD_REF_SLUG: refactor-api-v2-plangroups
GITHUB_BASE_REF_SLUG: develop
GITHUB_REF_SLUG_URL: 78-merge
GITHUB_HEAD_REF_SLUG_URL: refactor-api-v2-plangroups
GITHUB_BASE_REF_SLUG_URL: develop
GITHUB_SHA_SHORT: 70d2e62f

Next we create the EB application using hmanzur/actions-aws-eb@v1.0.0:

- name: Create EB Application
  uses: hmanzur/actions-aws-eb@v1.0.0
  with:
    command: init -r us-east-2 pr-${{ env.GITHUB_HEAD_REF_SLUG_URL }} -p multi-container-docker-19.03.13-ce-(generic)
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: "us-east-2"

What the command key translates to is:

eb init -r us-east-2 pr-refactor-api-v2-plangroups -p multi-container-docker-19.03.13-ce-(generic)

This creates an EB application in the region us-east-2 that runs on the Multi-container Docker platform.

For our application to run, we need to attach it to an Elastic Beanstalk environment, which is basically a collection of resources needed to run the application. An environment can run only one version of an app at a time.

- name: Create EB Environment
  uses: hmanzur/actions-aws-eb@v1.0.0
  continue-on-error: true
  with:
    command: create pr-${{ env.GITHUB_HEAD_REF_SLUG_URL }} -r us-east-2 --sample --cfg cowrywise -c cowrywise-pr-${{ env.GITHUB_HEAD_REF_SLUG_URL }}
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: "us-east-2"

The command run here is:

eb create pr-refactor-api-v2-plangroups -r us-east-2 --sample --cfg cowrywise -c cowrywise-pr-refactor-api-v2-plangroups

As expected, this creates an environment named pr-refactor-api-v2-plangroups. The --cfg attribute allows us to specify a configuration file that we want to use to create the environment. The one used here is present in the folder /.elasticbeanstalk/saved_configs.

Contents of cowrywise.cfg.yml:

EnvironmentConfigurationMetadata:
  Description: Cowrywise test app config
  DateCreated: '1613915402000'
  DateModified: '1613915402000'
Platform:
  PlatformArn: arn:aws:elasticbeanstalk:us-east-2::platform/Multi-container Docker running on 64bit Amazon Linux/2.25.0
OptionSettings:
  aws:ec2:instances:
    InstanceTypes: t2.large, t2.small
  aws:elasticbeanstalk:environment:
    ServiceRole: arn:aws:iam::183864691489:role/aws-elasticbeanstalk-service-role
    LoadBalancerType: application
  aws:autoscaling:launchconfiguration:
    IamInstanceProfile: aws-elasticbeanstalk-ec2-role
    EC2KeyName: ci-eb
EnvironmentTier:
  Type: Standard
  Name: WebServer
AWSConfigurationTemplateVersion: 1.1.0.0

Important things to note from here:

1. The platform is a multi-container Docker platform because we have two images to deploy: the cowrywise app and a MySQL image.

2. A t2.large EC2 instance is created for the EB app. t2.large is an 8 GB, 2 vCPU box.


The -c attribute allows us to specify the subdomain name we want to use for the environment. In this case, it's cowrywise-pr-refactor-api-v2-plangroups, so the resulting URL would be http://cowrywise-pr-refactor-api-v2-plangroups.us-east-2.elasticbeanstalk.com, making it unique enough for us.

After this step is run, we can then deploy our application.

Since we cannot create an environment with a name that already exists, we add continue-on-error: true to the "Create EB Environment" step. This allows the workflow to continue if an error occurs because the environment already exists, rather than terminating the workflow.

We do not need to put the flag on the previous step, because if the application already exists it does not raise any error.

There is a file in the work directory called Dockerrun.aws.json. This is the EB ecosystem's equivalent of docker-compose.yml: it describes how to deploy a set of docker containers as an Elastic Beanstalk application.

Here's a guide on how to set it up, the required keys and what each key represents.
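For orientation, a minimal sketch of what a two-container Dockerrun.aws.json for this setup might look like; the memory values, port mapping and MySQL image version are assumptions:

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "cws-app",
      "image": "docker.pkg.github.com/cowrywise/cowrywise-app/adashi-build:PRNUMBER",
      "essential": true,
      "memory": 1024,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8000
        }
      ]
    },
    {
      "name": "mysql",
      "image": "mysql:5.7",
      "essential": true,
      "memory": 1024
    }
  ]
}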

If you look at the file Dockerrun.aws.json, you would notice that the tag for the image used in cws-app reads PRNUMBER, which is not a valid tag; the real tag is the GitHub ref as described earlier.

"image": "docker.pkg.github.com/cowrywise/cowrywise-app/adashi-build:PRNUMBER",

So that we can programmatically set the version of the image to use for each deployment, we use the action in the next step, "Find and Replace image tag".

jacobtomlinson/gha-find-replace@master allows us to replace part of a text file with another string. So what we do is replace the PRNUMBER in Dockerrun.aws.json with the correct GitHub ref for the branch:

- uses: actions/checkout@v2
- name: Find and Replace Docker image tag
  uses: jacobtomlinson/gha-find-replace@master
  with:
    find: "PRNUMBER"
    replace: "pr-${{ env.GITHUB_REF_SLUG }}"
    include: "Dockerrun.aws.json"

Next we create a deployment archive that we pass to the action that deploys to Elastic Beanstalk.
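The archive step itself isn't spelled out here; a minimal sketch, assuming the package only needs the (already rewritten) Dockerrun.aws.json:

- name: Create deployment package
  run: zip -r deploy.zip Dockerrun.aws.json

Note that the version_label in the deploy step below also references a format-time step, presumably one that formats the current timestamp so each version label is unique; that step is not shown here. The archive is then handed to einaregilsson/beanstalk-deploy: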

- name: Deploy to EB
  uses: einaregilsson/beanstalk-deploy@v16
  with:
    aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    application_name: pr-${{ env.GITHUB_HEAD_REF_SLUG_URL }}
    environment_name: pr-${{ env.GITHUB_HEAD_REF_SLUG_URL }}
    version_label: "${{ env.GITHUB_REF_SLUG }}-${{ steps.format-time.outputs.replaced }}"
    region: us-east-2
    deployment_package: deploy.zip
    wait_for_environment_recovery: 200

The application name and environment name are derived from the GitHub ref as well, which is what we used to create them initially.

After this deployment is done, we send a notification to #engineeringteam on Slack with the deployment URL:
- name: Slack Notification
  uses: rtCamp/action-slack-notify@v2
  continue-on-error: true
  env:
    SLACK_CHANNEL: engineeringteam
    SLACK_COLOR: '#0066F5'
    SLACK_ICON: https://firebasestorage.googleapis.com/v0/b/cowrywise.appspot.com/o/beanstalk.png?alt=media&token=91b9d0e0-fe09-45ba-a378-0f8e8b1dcb04
    SLACK_MESSAGE: 'App deployed to http://cowrywise-pr-${{ env.GITHUB_HEAD_REF_SLUG_URL }}.us-east-2.elasticbeanstalk.com'
    SLACK_TITLE: Deployment URL
    SLACK_USERNAME: New Deployment
    SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}


[Screenshot: the resulting Slack notification]



Clear Merged Deployments: clear_deployments.yml

This workflow contains just one job that runs when a PR is closed: Delete App Deployments from AWS.

on:
  pull_request:
    types: ["closed"]
    branches-ignore:
      - 'master'

First we inject the environment variables from rlespinasse/github-slug-action@v3.x; right after, we initialise the app. Since the app already exists, this just imports its configuration.

Right after that, we delete the environment. This removes everything that was created to make the deployment work, including any EC2 instances and S3 buckets and all their contents.

eb terminate --all --force
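Put together, the job looks roughly like this (the job layout and step names are assumptions; the commands are the ones described above):

delete-deployment:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v1
    - name: Inject slug/short variables
      uses: rlespinasse/github-slug-action@v3.x
    - name: Initialise EB Application
      uses: hmanzur/actions-aws-eb@v1.0.0
      with:
        command: init -r us-east-2 pr-${{ env.GITHUB_HEAD_REF_SLUG_URL }} -p multi-container-docker-19.03.13-ce-(generic)
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_DEFAULT_REGION: "us-east-2"
    - name: Terminate EB Environment
      uses: hmanzur/actions-aws-eb@v1.0.0
      with:
        command: terminate --all --force
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_DEFAULT_REGION: "us-east-2"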
