Dockerizing a Production Deploy

Jonathan Canaveral May 13, 2024

In 2013, a tool called Docker changed software development. It uses containerization to let developers package an application together with all of its dependencies into a single portable unit that runs the same way in any environment.

When it comes to deployment, there are a lot of things you need to consider:

  • Price of the server usage
  • Deployment strategy
  • Rollback plan
  • Continuous deployment

And deployment itself can be done in several different ways.

Platforms such as Heroku come with benefits including effortless scaling and deployment; however, there are disadvantages like vendor lock-in and cost issues as utilisation increases.

Using Capistrano with AWS provides flexibility and control over deployment processes, leveraging Capistrano’s automation capabilities with AWS’s scalable infrastructure. However, setup complexity and maintenance overhead can be challenges compared to platform-as-a-service solutions like Heroku.

The simplest and most cost-effective way that I have tried is using Docker + Linode.

Combining Docker with Linode offers a versatile and customisable solution for deploying applications. Docker enables containerisation for consistent deployment across environments, while Linode provides robust virtual servers with flexibility in configuration and scaling options. However, managing infrastructure and Docker containers on Linode requires more technical expertise compared to managed platforms like Heroku.

Linode is a cloud computing platform, just like AWS, but cheaper.

Why Docker?

Using Docker for production deployments is very effective for small to medium-scale applications.

Assume we are running a Rails application that uses PostgreSQL as its database and Sidekiq for its background jobs.

You can spin all of this up with a single docker-compose.yml and a Dockerfile.

Dockerfile


# Use the official Ruby image as the base image
FROM ruby:latest

# Set environment variables for Rails
ENV RAILS_ENV=production \
    RAILS_LOG_TO_STDOUT=true \
    RAILS_SERVE_STATIC_FILES=true

# Set up the working directory in the container
WORKDIR /app

# Install dependencies
RUN apt-get update -qq && \
    apt-get install -y nodejs npm && \
    npm install -g yarn && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Copy Gemfile and Gemfile.lock into the container
COPY Gemfile Gemfile.lock ./

# Install gems
RUN gem install bundler && \
    bundle install --jobs 20 --retry 5

# Copy the rest of the application code into the container
COPY . .

# Precompile assets
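# (Note: many Rails apps need SECRET_KEY_BASE or RAILS_MASTER_KEY available at build
# time for this step; if yours does, pass it in as a build argument or a dummy value.)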
RUN bundle exec rails assets:precompile

# Expose port 3000 to the Docker host
EXPOSE 3000

# Start the Rails server with Puma
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
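
Before wiring everything together with Compose, you can sanity-check the Dockerfile on its own. A quick local smoke test (the image name here is just an example) might look like this:

docker build -t myapp:latest .
docker run --rm -p 3000:3000 myapp:latest

The container still needs a database before it can boot fully, but a successful build confirms that your dependencies install and your assets precompile.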

docker-compose.yml


version: '3'

services:
  # PostgreSQL service
  db:
    image: postgres:latest
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: myapp_production
    ports:
      - "5432:5432"

  # Redis service
  redis:
    image: redis:latest
    ports:
      - "6379:6379"

  # Rails app service
  web:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    ports:
      - "80:3000"
    depends_on:
      - db
      - redis
    environment:
      DATABASE_URL: "postgresql://myuser:mypassword@db/myapp_production"
      REDIS_URL: "redis://redis:6379/0"

volumes:
  db-data:
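
Since our example app also uses Sidekiq for background jobs, you would typically add a worker service to the services: section of the same file. A minimal sketch, reusing the same image and assuming a standard Sidekiq setup (adjust the command to match your configuration):

  # Sidekiq worker service (sketch)
  sidekiq:
    build:
      context: .
      dockerfile: Dockerfile
    command: bundle exec sidekiq
    volumes:
      - .:/app
    depends_on:
      - db
      - redis
    environment:
      DATABASE_URL: "postgresql://myuser:mypassword@db/myapp_production"
      REDIS_URL: "redis://redis:6379/0"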

By running


docker compose --env-file /path/to/your/.env up --build -d

Your application will be live on the web in no time.
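
One thing the compose file does not handle for you is database setup. A common approach, once the containers are up, is to run it inside the web container (db:prepare creates the database on the first boot and runs pending migrations on later deploys):

docker compose exec web bundle exec rails db:prepare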

If you would like this done automatically, a GitHub Actions workflow can be triggered any time you create and publish a release tag. The workflow below SSHes into your server, pulls the latest release tag, and then restarts your Docker services accordingly.


name: Deploy Latest Release

on:
  release:
    types: [published]

jobs:
  deploy:
    name: Deploy to Server
    runs-on: ubuntu-latest

    steps:
    - name: Checkout repository
      uses: actions/checkout@v2

    - name: Setup SSH
      uses: webfactory/ssh-agent@v0.5.3
      with:
        ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }} # repository secret holding your deploy key

    - name: SSH into server and deploy
      run: |
        ssh -o StrictHostKeyChecking=no -p ${{ secrets.SSH_PORT }} ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }} \
          'cd /path/to/your/repository && \
          git fetch --all && \
          latest_tag=$(git describe --tags `git rev-list --tags --max-count=1`) && \
          git checkout $latest_tag && \
          git pull origin $latest_tag && \
          docker-compose down && \
          docker-compose --env-file /path/to/your/.env up --build -d'
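
The workflow above expects a few repository secrets; the names used here (SSH_PRIVATE_KEY, SSH_HOST, SSH_USER, SSH_PORT) are just conventions, so use whatever you reference in the YAML. One way to set them is with the GitHub CLI:

gh secret set SSH_PRIVATE_KEY < ~/.ssh/your_deploy_key
gh secret set SSH_HOST --body "your.linode.ip.address"
gh secret set SSH_USER --body "deploy"
gh secret set SSH_PORT --body "22"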

Rolling back is easy as well: you just revert the code you previously deployed and create another release tag for it.
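
For example, assuming you use the GitHub CLI, a rollback could be as simple as reverting the bad commit and publishing a new release (the tag name below is just an example), which re-triggers the deploy workflow:

git revert <bad-commit-sha>
git push origin main
gh release create v1.2.4 --generate-notes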

Conclusion

Although deploying with Docker may seem scary and requires more technical skill with Docker and GitHub Actions, it is still a cost-effective and efficient way of handling your continuous deployment.

Once you have it all set up, all you need to do is create a release tag and that’s it. It’s easy to monitor your container logs as well, using `docker container logs` or just using `lazydocker`.
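
For example, to tail the Rails service defined in docker-compose.yml:

docker compose logs -f web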

What about running my application over SSL?

Stay tuned for my next blog post about Dockerizing Apps with SSL.