Intro to Simple Docker with Node & Koa

Docker notes

Overview

Docker is a useful tool for running applications in isolated containers. You can configure which ports and/or directories are available to the host system. Docker's best feature is the ability to run the same app (packaged into a container) with the same environment (OS, libraries, runtimes, etc.) on different hosts. This easily solves the problem of different versions of node/ruby/etc. and their version-specific "features". Another advantage of dockerized applications is that they know nothing about the host OS and cannot read or change host files (except the shared files you have configured).

Docker can also run applications in stateless mode (with the --rm flag): each time the application is started, it is just like the first time it was started.
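
For example, with --rm the container and anything it wrote are discarded on exit, so nothing survives between runs:

# Write a file in a throwaway container; the container is removed on exit
docker run --rm ubuntu bash -c "echo hello > /tmp/state; cat /tmp/state"
# A second run starts from the clean image again: /tmp/state no longer exists
docker run --rm ubuntu cat /tmp/state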

A non-obvious advantage of Docker is its layered file system, where each layer stores only the changes against the layer below. As a result, if you have 2 application containers based on the same image (for example Ubuntu, ~800MB) you need only (800 + app1 diffs + app2 diffs) MB on your hard drive. Further, Docker can be configured to start app containers on host boot and to restart terminated apps.
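
You can inspect these layers for any image; for example:

# Show the layers (and their sizes) that make up the node:4 image
docker history node:4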

Tools: Docker-Compose

A very useful tool for Docker is docker-compose. It allows you to run several Docker containers with configured links between them (start order, shared ports, directories ("volumes" in Docker terms)). You only need to write a docker-compose.yml file and then run (and build if needed) your apps with docker-compose up. Calling docker-compose down will stop all applications and remove the allocated resources (private networks, containers, etc.).

docker-compose also allows yml files to be shared between different docker-compose.yml files: common settings are stored in one file, and separate docker-compose.yml files are used for production and development, as shown in the sketch below.
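
A minimal sketch (the file and service names here are just examples): shared settings live in common.yml, and each docker-compose.yml pulls them in via extends:

# common.yml - shared service definition
version: '2'
services:
    base-app:
        image: node:4
        environment:
            - PORT=3000

# docker-compose.yml - development variant reusing the settings above
version: '2'
services:
    dev:
        extends:
            file: common.yml
            service: base-app
        volumes:
            - ".:/src/app"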

All in all, docker-compose is a must-have for app development. It allows you to run your app and the required database engine without installing development tools and a database on the host.

Sample docker-compose.yml file:

version: '2'
services:
    # Our web application (will be available on http://localhost:3000)
    dev: 
        build: . # Build local Dockerfile with the app
        volumes:
          - ".:/src/app" # current directory on host OS will be mapped as /src/app on the container
        ports: 
            - "3000:3000" # open access to port 3000
        depends_on:
            - mongo # Mongo DB will be started before this app
        environment:
            - PORT=3000
            - DATABASE_URL=mongodb://mongo/db # Mongo db is available on host 'mongo' in this private network.
            - DEBUG=*
        restart: on-failure # restart the app on failure

    # Mongo DB
    mongo:
        image: mongo # Use official image with latest mongo db
        volumes:
            - ./db:/data/db # store databases in the host OS directory ./db (the official mongo image keeps its data in /data/db)
        restart: on-failure
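
With this file in the project directory, the whole stack is managed as described above:

docker-compose up    # build the app image (if needed) and start both containers
docker-compose down  # stop everything and remove the allocated resources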

Tools: Docker-Machine

Another useful tool for Docker is docker-machine. It allows you to work with a remote Docker daemon (on a remote machine, a virtual machine, a VPS) just like the local one. For example, the command docker-machine create -d virtualbox default will create a virtual machine named default in VirtualBox (which should be installed) based on boot2docker.iso. Then, if you run eval "$(docker-machine env default)", you can use docker and docker-compose as you would with a local docker, as shown below.
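
Put together, the workflow looks like this:

# Create a VirtualBox VM named 'default' with boot2docker
docker-machine create -d virtualbox default
# Point the local docker client at the daemon inside that VM
eval "$(docker-machine env default)"
# From now on docker (and docker-compose) commands run against the VM
docker ps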

docker-machine used to be the only way to use Docker on Windows and OS X. Now there are more "native" solutions for Windows 10 (Docker for Windows) and for OS X 10.10.3+ (Docker for Mac).

How To: Passing settings to contained applications

The easiest way to do that is using environment variables. You can pass them via -e NAME=VALUE or --env-file with docker, or with the environment or env_file settings in docker-compose.yml with docker-compose. If you need to use config files, you can put them in a specific directory on the host OS and then pass this directory as a volume to the container (like -v /path/to/conf/on/host:/conf with docker).
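
For example (the image and file names here are hypothetical):

# Pass individual variables
docker run -e NODE_ENV=production -e PORT=3000 my-app
# Pass a whole file of NAME=VALUE lines
docker run --env-file ./production.env my-app
# Share a host config directory with the container
docker run -v /path/to/conf/on/host:/conf my-app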

Useful docker commands

Run NodeJS tests without nvm

cd /dir/with/nodejs/app
# Run tests on the latest Node v4 (bash -c keeps && inside the container; -v requires an absolute host path, hence "$PWD")
docker run --rm -i -t -v "$PWD":/src -w /src node:4 bash -c "npm install && npm test"

# Run tests on the latest Node v6
docker run --rm -i -t -v "$PWD":/src -w /src node:6 bash -c "npm install && npm test"

# Run tests on the latest stable Node
docker run --rm -i -t -v "$PWD":/src -w /src node bash -c "npm install && npm test"

Build .Net Core project without installing .Net Framework and Visual Studio

cd /dir/with/file/project.json

docker run --rm -i -t -v "$PWD":/src -w /src microsoft/dotnet:1 bash -c "dotnet restore && dotnet build -c Release"

Run specific version MongoDB server by a command

docker run -d -p 27017:27017 mongo:3.2

# Now you can use mongo db on localhost without any auth
# (with docker-machine, use the machine's IP instead of localhost)

# If you would like MongoDB to start again after the next boot of the host OS (dockerd should be configured to start automatically), run
docker run -d -p 27017:27017 --restart=always mongo:3.2
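
To open a mongo shell inside the running container (get its name or id from docker ps):

docker exec -i -t <container-id-or-name> mongo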

Run specific version Postgresql server by a command

docker run -d -p 5432:5432 postgres:9.5

# Now you can use postgresql on localhost with user `postgres` and database `postgres` (no password is required by default)
# (with docker-machine, use the machine's IP instead of localhost)

# If you would like Postgresql to start again after the next boot of the host OS (dockerd should be configured to start automatically), run
docker run -d -p 5432:5432 --restart=always postgres:9.5
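
Likewise, to open a psql session inside the running container:

docker exec -i -t <container-id-or-name> psql -U postgres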

Run Bash command line inside container

docker exec -i -t <container-id-or-name> bash

# Now you can execute commands inside the container's OS

# For docker-compose (doesn't work on Windows yet)

docker-compose exec <service-name> bash

Stop a container (a packed application)

docker kill <container-id-or-name>
# docker stop <container-id-or-name> stops it gracefully instead (SIGTERM first, then SIGKILL)

Show log of a container (a packed application)

docker logs <container-id-or-name>

# To follow log output run
docker logs -f <container-id-or-name>

# You can use different logging drivers
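
# For example, this sends a container's logs to syslog instead of the default json-file driver
docker run -d --log-driver=syslog <image-name>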

Show running containers (packed applications)

docker ps

Save a container to image (to publish it or use in children containers)

docker commit <container-id-or-name> <username>/<image-name>
# After that (if needed) you can dump this image to a tar file
docker save -o image-file.tar <username>/<image-name>
# Then you can restore this image with
docker load -i image-file.tar

# To publish your image to https://hub.docker.com you should
# log in (only the first time)
docker login
# Then push your image
docker push <username>/<image-name>

# Docker supports private image registries too.

docker-compose usage

# To build and run (install) containers  
docker-compose up

# Then to stop containers 
docker-compose stop

# To run them again
docker-compose start

# To stop and remove containers and related resources (uninstall)
docker-compose down

GUI tools for Docker

Kitematic

My thoughts about using Docker on development

I found a lot of solutions on the internet which recommend building a dedicated container for the app each time. But I see disadvantages there:
– You need a ready Dockerfile to start the app
– You must have a ready-to-run application
– You don't have direct control over the application's execution (you have to stop the container to rebuild the app, or use tools like nodemon; no direct access to the app's console)

I suggest not creating a Dockerfile at the beginning (it can be created later if needed).

Steps for new project

I will use nodejs here, but this solution can be applied to other languages and tools too.

1. Create an empty directory for the project and init source version control there

mkdir -p /path/to/project
cd /path/to/project
git init

2. Create a docker-compose.yml which will run the required databases and other services

version: '2'
services:
    app:
        image: node:4 # we will work with latest NodeJS 4.X
        volumes: # bind sources to /src
          - ".:/src"
        ports: 
            - "3000:3000" #open required ports here
        depends_on:
            - mongo
        environment: # add required environment variables if needed
            - PORT=3000
            - DATABASE_URL=mongodb://mongo/db
        command: bash -c "sleep infinity"  # DON'T run app. Wait forever (Otherwise app's container will be stopped)
    
    # Required services
    mongo:
        image: mongo:3.2
        ports:
            - "27017:27017" # Optional, to have ability to connect to mongo db from host OS
        volumes:
            - ./.database:/data/db # Add .database to .gitignore (database files will be stored in .database; the official mongo image keeps its data in /data/db)
    # add other services if needed

3. Run docker-compose

docker-compose up -d # run background

4. Connect to the command line of the app container and go to the directory /src

docker-compose exec app bash

# Now in new bash session
cd /src

Now you have access to source directory from container’s shell.

5. Create app files. Install dependencies.

Now you can create any files the app needs in the host OS using any text editor. From the container's shell you can install dependencies and run the app.

# Container's shell
npm init -y
npm install --save koa@next

Create a file index.js in the project directory with the following content:

var Koa = require('koa');

var app = new Koa();

// Koa 2 ("koa@next") middleware receives a context and a next function
app.use(function(ctx, next){
  ctx.body = 'Hello from Koa.Next';
});

// PORT comes from the environment section of docker-compose.yml
app.listen(process.env.PORT);

Run the app

# Container's shell
node index.js
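
While the app is running you can check from the host OS that it responds (port 3000 is mapped in docker-compose.yml):

# Host OS shell
curl http://localhost:3000
# -> Hello from Koa.Next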

Add more files and modules. From the container's shell you can run/stop the app, run tests, etc.

As a result, you can develop a nodejs (and not only nodejs) application on a computer without having NodeJS and MongoDB installed.

6. Create a Dockerfile for production.

Create a Dockerfile with the following content (the onbuild image copies the sources, runs npm install and starts the app with npm start, so make sure package.json contains a start script such as "start": "node index.js"):

# use node:4-onbuild if you need Node 4.X
FROM node:4-onbuild
EXPOSE 3000
ENV PORT=3000

And create a new docker-compose.production.yml for production mode.

version: '2'
services:
    app:
        build: .
        ports: 
            - "$PORT:3000" #open required ports here
        depends_on:
            - mongo
        environment: 
            - DATABASE_URL=mongodb://mongo/db
            - NODE_ENV=production
        restart: always
    # Required services
    mongo:
        image: mongo:3.2
        restart: always

Now on the production server set the environment variable PORT to the required value (Heroku does that itself) and run docker-compose -f docker-compose.production.yml up -d (the -f flag must come before the subcommand) to run the packed app.
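
For example, to serve the app on port 80 of the server:

# Host OS shell on the production server
PORT=80 docker-compose -f docker-compose.production.yml up -d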