Today there are many CI/CD options. Most hosting services include some CI/CD in their free tier (GitHub and GitLab, for example), and alternatively you can self-host a solution like Jenkins and avoid an online service altogether. But even with such fierce competition, I think CircleCI's free offering is an interesting choice.
The documentation is excellent, it has SSH debugging, and there are lots of ready-made solutions you can use (e.g. the AWS CLI orb).
Getting Started
There are different ways to start. In a nutshell, CircleCI provides Docker or VM environments that you orchestrate with a YAML configuration saved in the <project-repo>/.circleci/config.yml file.
To me the best starting point is the CircleCI concepts documentation, which includes a super-useful image describing the YAML elements.
Once you understand that and have an idea of what you plan to do, my next step would be to simply use the CircleCI in-app configuration editor. The reason to edit within their environment is that, when you are starting out, the online linter is invaluable. This tool will save you LOTS of time and grief.
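If you prefer to stay in your terminal, the CircleCI CLI can run the same validation locally (a minimal sketch; it assumes you have installed the circleci CLI and run it from the repo root):

# Check .circleci/config.yml for syntax and schema errors
circleci config validate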
Using CircleCI next-gen Docker images
They have images for Go, Node.js, Python, PHP, etc.
For example, in the case of Node you can find the different versions at cimg/node.
In my case, and just to match my local configuration I’m using:
jobs:
  build:
    docker:
      - image: cimg/node:16.17.0
    steps:
      - checkout
      - run: node --version
Customizable compute environments
In the case of Docker/Linux, and for the free accounts, CircleCI supports: Small, Medium, Medium+ & Large.
- small: 1 CPU, 2 GB RAM & 5 credits/min
- medium: 2 CPU, 4 GB RAM & 10 credits/min
- medium+: 3 CPU, 6 GB RAM & 15 credits/min
- large: 4 CPU, 8 GB RAM & 20 credits/min
You should set the resource class you'll use in your config.yml (Docker execution environment):
jobs:
  build:
    docker:
      - image: buildpack-deps:trusty
        auth:
          username: mydockerhub-user
          password: $DOCKERHUB_PASSWORD  # context / project UI env-var reference
    resource_class: large  # pick a class available on your plan (free: small to large)
    steps:
      # ... other config
But you can also use “reusable executors” within your workflow:
executors:
  docker-executor:
    docker:
      - image: cimg/node:16.17.0

jobs:
  build:
    executor: docker-executor
    steps:
      - build
      # ...
Syntax and particularities
change directory “doesn’t work”
The change directory command “doesn’t work”, but this is really an “anti-feature”: jobs do not keep the effect of cd between steps. To work in a subdirectory, you'll need to add the cd to the command line itself, e.g.:
- run:
    command: cd mypath ; npm run build
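Alternatively, run steps accept a working_directory attribute, which keeps the command itself clean (a minimal sketch; mypath is a hypothetical folder inside your repo):

- run:
    name: Build inside mypath
    working_directory: mypath  # only applies to this step
    command: npm run build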
Avoiding problems when using commands with colons (“:”)
This is a problem with the YAML parsing, as described in “YAML parser confused by colons in shell commands”. Just replace:
- run:
    name: Build header
    command: chdr="'Authorization: Bearer SENDGRID_KEY'"
with:
- run:
    name: Build header
    command: |
      chdr="'Authorization: Bearer SENDGRID_KEY'"
Debugging with SSH
The best way to debug CircleCI jobs is to get into the environment over SSH. Any failed pipeline can be re-launched with SSH enabled:
- Log into the CircleCI dashboard
- Click the FAILED job
- Click the dropdown button at the top right of the dashboard, then select “Rerun job with SSH”.
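Once the job reruns, the “Enable SSH” step in the job output prints the connection string you paste into your terminal (the port and IP below are placeholders; authentication uses the SSH key associated with your VCS account):

# Printed by the "Enable SSH" step (placeholder values)
ssh -p 64535 54.221.135.43

From there you can inspect the filesystem and re-run commands by hand.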
Conditional building
I guess it is obvious, but I'd rather flag it: keep all keys and secrets outside your YAML files. For that, each project has its own settings under “Environment Variables”. Once declared there, they are available for your jobs to use.
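For example, once a variable is declared in the project settings it can be referenced like any shell variable inside a run step (a minimal sketch; MY_API_KEY and the deploy script are hypothetical):

- run:
    name: Use a secret without hard-coding it
    command: ./scripts/deploy.sh "$MY_API_KEY"  # MY_API_KEY comes from project settings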
Once your GitHub repo (in my case) is connected to a CircleCI project, every commit will trigger the associated CircleCI jobs (described in the <project-repo>/.circleci/config.yml file). This means it is important to make sure you only trigger jobs for the branches you need:
...
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: main
In this example, “filters: branches: only: main” means the “deploy” job will only get triggered for the main branch, while the “build” job runs for all commits.
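Filters also accept an ignore key and regular expressions, so the logic can be inverted (a minimal sketch; the branch pattern is hypothetical):

- deploy:
    requires:
      - build
    filters:
      branches:
        ignore: /feature-.*/  # run on every branch except feature-* ones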
You can get a bit more creative with conditional steps in jobs and conditional workflows:
- when:
    condition:
      or:
        - and:
            - equal: [ main, << pipeline.git.branch >> ]
            - or: [ << pipeline.parameters.param1 >>, << pipeline.parameters.param2 >> ]
        - or:
            - equal: [ false, << pipeline.parameters.param1 >> ]
    steps:
      - run: echo "I am on main AND param1 is true OR param2 is true -- OR param1 is false"
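For that snippet to compile, param1 and param2 must also be declared as pipeline parameters at the top level of the config (a minimal sketch; the names and defaults just match the example above):

version: 2.1
parameters:
  param1:
    type: boolean
    default: false
  param2:
    type: boolean
    default: false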
But I'm not sure how common this need is. In my case, plain branch filtering was all I needed.
Another interesting feature is the step-level when attribute, which can also condition what gets done:
- run:
    name: Upload CodeCov.io Data
    command: bash <(curl -s https://codecov.io/bash) -F unittests
    when: always  # Uploads code coverage results, pass or fail
But this “when” only accepts the values “always”, “on_success” and “on_fail”. Its intended purpose is to react to the results of previous steps.
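For example, on_fail is handy for collecting diagnostics only when something breaks (a minimal sketch; the log file path is hypothetical):

- run:
    name: Show logs on failure
    command: cat ~/project/npm-debug.log || true  # hypothetical log file
    when: on_fail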
Persisting data between jobs
When you are building and deploying in separate jobs you’ll need to persist the data between them:
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/node:10.16.3
    working_directory: ~/repo
    steps:
      - checkout
      # Pull the theme's submodule code
      - run: git submodule sync
      - run: git submodule update --init
      # Download previously cached dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
      # Update node modules
      - run: npm install
      # Install the hexo-cli
      - run: sudo npm install -g hexo-cli
      # Cache the current state of the node modules
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
      # Generate the static files of the website
      - run: hexo clean
      - run: hexo generate
      # Persist the generated site into the workspace for use in the deploy job
      # (this has to come after the files are generated)
      - persist_to_workspace:
          root: public
          paths:
            - '*'
  deploy:
    docker:
      - image: 'circleci/python:3.7.4'
    working_directory: ~/repo
    steps:
      - attach_workspace:
          at: public
      - run:
          name: Install awscli
          command: sudo pip install awscli
      - run:
          name: Deploy to S3
          command: aws s3 sync public/ s3://agileexplorations.com --delete
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
This example uses persist_to_workspace & attach_workspace.
These concepts are reviewed in more detail within Using Workspaces to Share Data between Jobs.
aws-s3 in CircleCI (and debugging)
There can be situations where your job doesn't fail, but it also doesn't do what you expected. When that happens you can introduce an error and re-run with SSH, or upload your data to S3 in the hope it helps you find out what's wrong.
Common tools (like the AWS CLI) are packaged into orbs by either CircleCI or the community. In particular, the AWS CLI can be used as follows:
version: '2.1'
orbs:
  aws-s3: circleci/aws-s3@3.0
jobs:
  build:
    docker:
      - image: cimg/node:16.17.0
    steps:
      - checkout
      - run:
          name: Make sure we have a project folder
          command: mkdir -p ~/project
      - run: echo "lorem ipsum" > ~/project/build_asset.txt
      # For this to work you'll need to have in your Environment Variables:
      # AWS_REGION, AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY
      - aws-s3/copy:
          arguments: '--no-progress'
          from: ~/project/build_asset.txt
          to: 's3://wk-circleci-logs/7sabores/'
      - aws-s3/copy:
          arguments: '--no-progress'
          from: ~/project/netlify-deploy-output.txt
          to: 's3://wk-circleci-logs/7sabores/'
workflows:
  s3-example:
    jobs:
      - build
They also have the aws-s3/sync command available. You can see an example in the orb repo.
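For completeness, syncing a whole folder with it looks roughly like this (a minimal sketch based on the orb's documented parameters; the bucket and folder names are hypothetical):

- aws-s3/sync:
    from: public
    to: 's3://my-logs-bucket/site/'
    arguments: '--delete'  # remove remote files that no longer exist locally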