Using Containers to Automate Your Development Environment

Getting started on an existing codebase can be daunting, and it becomes even more time-consuming if the team hasn’t taken the time to automate the creation of their development environment.

When you start working on an existing project, you’ll likely follow similar steps to the ones below to run the code on your machine:

  1. Download the source code
  2. Set up the development environment
  3. Build the software
  4. Run the software

Things can start to get complicated at step #2. A development environment is the collection of tools a developer needs installed on their computer in order to run and work on the codebase. Having each developer install these tools manually leads to a few problems:

  • Time – Installing and configuring all of that software can take a long time, sometimes days per developer.
  • Diversity – Each developer on your team could be running a different OS with a different configuration. You won’t be able to write setup instructions for every possible combination.
  • Version Conflicts – What happens when this project needs a certain version of a dependency, but you already have a different version installed for working on a separate project?

How do containers help?

You can remedy these complications by packaging up a common development environment for all developers on your team to download and use. Containers allow you to bundle together an environment that includes everything your application requires to run, including an OS, tools, and dependencies. Virtual machines have been the tool of choice for this in the past, but containers offer the same environment isolation with a smaller image size, faster startup, and often better performance.

To demonstrate how you can use containers for this, we’re going to build a development environment for creating, validating, and deploying CloudFormation templates.

Strategy

Note: Docker is the most popular tool for building and running containers, so we’ll use it for this example. See the Docker documentation for instructions on installing Docker on your OS.

When using a container to create your development environment, you’ll need to answer two questions:

  1. How will I get data into and out of my container?
  2. How will I run commands inside of my container?

Data 

Your container image will contain an operating system and any software packages needed to work on your project. Any other files, such as source code and credentials, will live outside of the container. To make these files accessible inside the container, we’ll use a Docker volume.

In a container, any changes made to the file system are lost when the container exits unless those changes happen inside a mounted volume. Because we mount our source code directory as a volume, any files created or modified there persist on the host after the container exits.
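
For example, here’s a minimal sketch of the difference, using the public alpine image rather than our project image:

$ docker run --rm --volume "$(pwd)":/mnt alpine touch /mnt/kept.txt /tmp/lost.txt
$ ls kept.txt                               # persists on the host, because /mnt was a mounted volume
$ docker run --rm alpine ls /tmp/lost.txt   # fails: that file existed only inside the first container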

Running Commands

Running individual commands inside of a container can become tedious. For example, without a container you would run the following command to lint your CloudFormation file:

$ cfn-lint template.yaml

But to run the same command in a container you would have to use this command:

$ docker run --rm -it --workdir /mnt --volume "$(pwd)":/mnt my_container_name cfn-lint template.yaml

Because these commands can become so long, I recommend capturing them in a Makefile. You could also capture them in shell scripts, but a Makefile gives you a single place to keep all of your commands and share variables between them. With a Makefile, running the linter becomes:

$ make lint

Prerequisites 

A developer will need these already installed on their machine before creating their development environment:

  1. Docker
  2. GNU Make
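
You can quickly confirm that both are available before going any further:

$ docker --version
$ make --version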

Files 

$ tree .
.
├── credentials
├── template.yaml
├── Dockerfile
└── Makefile

credentials

[default]
region = us-west-2
output = table
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

A developer’s credentials are created on the developer’s machine, not stored in version control or inside the container. This is crucial so that secrets are not exposed to the public and so that each developer can have their own set of credentials.

In AWS, credentials can be stored in a number of ways. Here I’ll store my credentials in a file that will be mounted into our development environment.
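
The Makefile below mounts this file read-only at /root/.aws/credentials, which is where the AWS CLI looks for credentials when running as root inside the container. Here’s a quick sketch of the idea, again using the public alpine image:

$ docker run --rm --volume "$(pwd)/credentials":/root/.aws/credentials:ro alpine cat /root/.aws/credentials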

template.yaml

---
Description: A short description of the project
Resources:
  Test:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: "10.0.0.0/16"
  # The rest of your infrastructure goes here…

We’re just going to deploy a single VPC to verify that the development environment is working, but you can replace this file with your own CloudFormation template.

Dockerfile

# Start from a minimal OS containing Python 3.8
FROM python:3.8.0-alpine
 
# Install AWS tools
RUN pip install -q awscli aws-shell cfn-lint yamllint
 
# Change to the directory where the source code should be mounted
WORKDIR /project

Your Dockerfile contains the instructions that tell Docker how to install the software you want to include in your development environment.
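
If you want to sanity-check the image on its own, you can build it and run one of the installed tools directly; the Makefile’s setup target below performs the same build:

$ docker build -t aws_example .
$ docker run --rm aws_example cfn-lint --version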

Makefile

# Variables

CONTAINER=aws_example
SOURCE_MOUNT=/project
STACK_NAME=test

CREDENTIALS_FILE=$(shell pwd)/credentials

DOCKER_RUN=docker run \
  --rm \
  -it \
  --volume $(shell pwd):$(SOURCE_MOUNT) \
  --volume $(CREDENTIALS_FILE):/root/.aws/credentials:ro \
  $(CONTAINER)

# Targets

.PHONY: shell
shell:
	@$(DOCKER_RUN) aws-shell

.PHONY: setup
setup:
	@docker build -t $(CONTAINER) .

.PHONY: lint
lint:
	@$(DOCKER_RUN) yamllint template.yaml
	@$(DOCKER_RUN) cfn-lint template.yaml

.PHONY: deploy
deploy:
	@$(DOCKER_RUN) aws cloudformation create-stack --template-body file://template.yaml --stack-name $(STACK_NAME)

.PHONY: destroy
destroy:
	@$(DOCKER_RUN) aws cloudformation delete-stack --stack-name $(STACK_NAME)
	@echo "Deleted stack $(STACK_NAME)"

The DOCKER_RUN variable contains the command used to run other commands inside the Docker container. It includes a few flags:

  • --rm – Remove the container once the command exits
  • -it – Keep STDIN open (interactive) and allocate a pseudo-terminal (TTY)
  • --volume – Mount a file or directory into the container
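
Put together, a target like lint expands to roughly this command, with the paths filled in by Make:

$ docker run --rm -it --volume "$(pwd)":/project --volume "$(pwd)/credentials":/root/.aws/credentials:ro aws_example cfn-lint template.yaml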

Workflow

Before you can use your container, you have to build the container image:

$ make setup

This will create and save a container image on your machine. This has to be done before you can run anything within a container.
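
You can confirm that the image exists with:

$ docker image ls aws_example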

After you’ve created your container image, a typical CloudFormation workflow would look like this:

1. Validate the CloudFormation template

$ make lint

2. Deploy to AWS & manually verify that the infrastructure performs as expected

$ make deploy

3. Clean up – destroy the deployed infrastructure

$ make destroy

Congratulations, you’ve deployed and tested your AWS CloudFormation templates without installing any CloudFormation tools on your machine.
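
Because the entire environment lives in a container image, removing it later is just as simple (using the aws_example image name from the Makefile):

$ docker image rm aws_example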

Bonus: It’s also helpful to be able to start a shell session inside of your development environment so you can issue commands manually. To do this, I’ve added a “shell” target to my Makefile. Here I’m using aws-shell as the shell, but you can also use sh or bash if you want to run non-AWS commands. Just make sure that the shell you use is installed in your Docker image.

$ make shell
First run, creating autocomplete index...
Creating doc index in the background. It will be a few minutes before all documentation is available.
aws>
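
Inside aws-shell you type AWS CLI commands without the “aws” prefix. For example, to check on the stack deployed earlier (the test stack name comes from the Makefile’s STACK_NAME variable):

aws> cloudformation describe-stacks --stack-name test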

Conclusion

Automating the creation of a project’s development environment enhances the productivity of everyone on the team. Containers make this easy and performant, and they work with any operating system and programming language.

Chaise Conn

Full-Stack Developer

Chaise is a self-taught, Full-Stack Developer with expert knowledge in Python programming and Linux systems. After high school, he interned at a software company, managing relational databases, working with layered architecture and test automation, and developing in the .NET framework.

As he moved to front-end development, Chaise became adept at swiftly building responsive sites with clean code bases, working on projects for Microsoft, including Minecraft: Education Edition. He also has Agile experience and earned a Linux+ Certification.

In his free time, Chaise enjoys working on a number of personal projects, and attempting to perfect his Linux system.