My personal notes, projects and configurations.
Aditya Hajare (Linkedin).
WIP (Work In Progress)!
Open-sourced software licensed under the MIT license.
- Must Check Links
- Docker Installation Tips
+ Installing on Windows 10 (Pro or Enterprise) + Installing on Windows 7, 8, or 10 Home Edition + Installing on Mac + Installing on Linux + Play With Docker (PWD) Online + Install using get.docker.com
- Theory
- Important Points To Remember - Difference between Containers and Virtual Machines (VMs) - To see what's going on in containers - Docker networks concepts for Private and Public communications - Docker networks CLI management of Virtual Networks - Docker networks: Default Security - What are Images - Image Layers - Docker Image tagging and pushing to Docker Hub - Dockerfile - Inside Dockerfile - To build Image from Dockerfile
- Cleaning Up Docker
+ To cleanup all dangling images: + To cleanup everything: + To see space usage:
- Container Lifetime And Persistent Data
+ Data Volumes - Named Volumes - When would we ever want to use 'docker volume create' command? - Data Volumes: Important Docker Commands + Bind Mounts - Bind Mounts: Important Docker Commands
- Docker Compose - The Multi-Container Tool
+ docker-compose.yml + docker-compose CLI + docker-compose to build Images at runtime
- Docker Swarm - Built-In Orchestration
+ How to check if swarm mode is activated and how to activate it + What happens behind the scenes when we run docker swarm init? + Key Concepts + Creating a 3-node Swarm Cluster
- Swarm - Scaling Out With Virtual Networking
+ Overlay Network Driver + Example: Drupal with Postgres as Services
- Swarm - Routing Mesh
+ Docker service logs to see logs from different nodes - Swarm - Stacks
+ How to deploy Swarm stack using compose file? - Swarm - Secret Storage
+ What is a Secret? + How to create a Secret? + How to decrypt a Secret? + How to remove a Secret?
- Swarm - Service Updates Changing Things In Flight
+ Swarm Update Examples - Docker Healthchecks
+ Where do we see Docker Healthcheck status? + Healthcheck Docker Run Example + Healthcheck in Dockerfile
- Container Registries
+ Docker Hub + Running Docker Registry + Running A Private Docker Registry + Registry And Proper TLS + Private Docker Registry In Swarm
- Kubernetes
+ What is Kubernetes + Why Kubernetes + Kubernetes vs. Swarm
- Kubernetes Installation And Architecture
+ Kubernetes Installation - Docker Desktop - Docker Toolbox on Windows - Linux or Linux VM in Cloud - Kubernetes In A Browser + Kubernetes Architecture Terminology
- Kubernetes Container Abstractions
+ Kubernetes Container Abstractions + Kubernetes Run, Create and Apply
- Kubernetes - Basic Commands
+ Creating First Pods - nginx + Scaling Replica Sets - Apache Httpd + Inspecting Kubernetes Objects - Apache Httpd
- Kubernetes Services
+ Kubernetes Services - ClusterIP (default) + Kubernetes Services - NodePort + Kubernetes Services - LoadBalancer
- Kubernetes Management Techniques
+ Run, Create, Expose Generators + Generators Example + Imperative vs. Declarative + Imperative Kubernetes + Declarative Kubernetes + Three Management Approaches
- DevOps Style Kubernetes Using YAML
+ Using kubectl apply + Kubernetes Configuration YAML + How To Build YAML File + Dry Runs With Apply YAML + Labels And Annotations
- Kubernetes FAQ
+ What is Kubernetes + Difference between Docker Swarm and Kubernetes
- Generic Examples
+ Running 3 Containers: nginx (80:80), mysql (3306:3306), httpd (Apache Server - 8080:80) + To clean up apt-get cache + To get a Shell inside Container + To create a temp POD in cluster and get an interactive shell in it + Docker Swarm - Create Our First Service and Scale it Locally + Creating a 3-Node Swarm Cluster + Scaling Out with Overlay Networking + Scaling Out with Routing Mesh + Create a Multi-Service Multi-Node Web App + Swarm Stacks and Production Grade Compose + Using Secrets in Swarm Services + Using Secrets with Swarm Stacks + Create A Stack with Secrets and Deploy + Service Updates: Changing Things In Flight + Healthchecks in Dockerfile
- How DNS works? DNS basics:
- Round-Robin DNS, what is it:
- Official Docker Image specifications:
- List of official Docker Images:
- The Cloud Native Trail Map is CNCF's recommended path through the cloud native landscape. The cloud native landscape, serverless landscape, and member landscape are dynamically generated on this website:
- The 12-Factor App. Key to Cloud Native App Design, Deployment, and Operation.
- 12 Fractured Apps.
- YAML quick reference: https://yaml.org/refcard.html
- Sample YAML file. Generic: https://yaml.org/start.html
- docker-compose tool download for linux:
- An introduction to immutable infrastructure.
- MacOS shell tweaking:
- MacOS - Commands for getting into local Docker VM:
- Windows - Commands for getting into local Docker Moby VM:
- Docker Internals - Cgroups, namespaces, and beyond: what are containers made from?:
- Windows Containers and Docker: 101:
- Heart of the SwarmKit Topology Management (Youtube & slides):
- Swarm Mode Deep Dive:
- Raft Consensus Visualization (Our Swarm DB and how it stays in sync across nodes):
- Docker Swarm Firewall Ports:
- How To Configure Custom Connection Options for your SSH Client:
- Create and Upload a SSH Key to Digital Ocean:
- Only one host for production environment. What to use: docker-compose or single node swarm?
- Kubernetes Components:
- MicroK8s for Linux Hosts:
- Minikube Download:
- Install kubectl on Windows when you don't have Docker Desktop:
- Kubernetes Service:
- Kubernetes Namespaces:
- Kubernetes Pod Overview:
- kubectl for Docker Users:
- kubectl Cheat Sheet:
- Stern (multi-pod and container log tailing for Kubernetes) for better multi-node log viewing at the CLI:
- What is a Kubernetes Service:
- Kubernetes Service Types:
- Using a Kubernetes Service to Expose Our App:
- Kubernetes NodePort Service:
- CoreDNS for Kubernetes:
- Kubernetes DNS Specifications:
+ Installing on Windows 10 (Pro or Enterprise)
- This is the best experience on Windows, but due to OS feature requirements, it only works on the Pro and Enterprise editions of Windows 10 (with latest update rollups). We need to install Docker for Windows from the Docker Store.
- With this Edition we should use PowerShell for the best CLI experience.
- Install Docker Tab Completions For PowerShell Plugin.
- Useful commands:
```
docker version
docker ps
docker info
```
+ Installing on Windows 7, 8, or 10 Home Edition
- Unfortunately, Microsoft's OS features for Docker and Hyper-V don't work in these older versions, and the Windows 10 Home edition doesn't have Hyper-V, so we'll need to install the Docker Toolbox, which is a slightly different approach to using Docker with a VirtualBox VM. This means Docker will be running in a Virtual Machine that sits behind the IP of our OS, and uses NAT to access the internet.
- NOTE FOR TOOLBOX USERS: all urls that use `http://localhost`, we'll need to replace with `http://192.168.99.100`
- Useful commands:
```
docker version
docker-machine ls
docker-machine start
docker-machine help
docker-machine env default
```
+ Installing on Mac
- We'll want to install Docker for Mac, which is great. If we're on an older Mac with less than OSX Yosemite 10.10.3, we'll need to install the Docker Toolbox instead.
- Useful commands:
```
docker version
docker container
docker container run
docker pause
```
+ Installing on Linux
- Do not use the built-in default packages like `apt/yum install docker.io`, because those packages are old and not the official Docker-built packages.
- Prefer to use Docker's automated script to add their repository and install all dependencies: `curl -sSL https://get.docker.com/ | sh`, but we can also install in a more manual way by following the specific instructions on the Docker Store for our distribution, like this one for Ubuntu.
- Useful commands:
```
# http://get.docker.com
curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh
sudo usermod -aG docker bret
sudo docker version
docker version
docker-machine version
# http://github.com/docker/compose
# http://github.com/docker/compose/releases
curl -L https://github.com/docker/compose/releases/download/1.15.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
docker-compose version
# http://github.com/docker/machine/releases
docker image ls
```
+ Play With Docker (PWD) Online
- The best free online option is to use play-with-docker.com, which will run one or more Docker instances inside our browser, and give us a terminal to use it with.
- We can actually create multiple machines on it, and even use the URL to share the session with others in a sort of collaborative experience.
- Its only real limitation is that it's time-bombed to 4 hours, at which time it'll delete our servers.
+ Install using get.docker.com
- Go to https://get.docker.com and read the instructions.
- Important Points To Remember
- Forget IPs: static IPs, and using IPs for talking to Containers, is an anti-pattern. Always try our best to avoid it!
- The Docker daemon has a built-in DNS server that Containers use by default.
- Docker defaults the `hostname` to the Container's name, but we can also set aliases.
- Containers shouldn't rely on IPs for inter-communication.
- Make sure that we are always creating custom networks instead of using the default ones.
- `Alpine` is a distribution of Linux which is very small in size, i.e. less than 5 MB.
- Difference between Containers and Virtual Machines (VMs)
- Containers:
- Containers aren't Mini-VM's.
- Containers are just processes. They are processes running in our host OS.
- Containers are limited to what resources they can access.
- Containers exit when process stops.
- A VM provides an abstract machine that uses device drivers targeting the abstract machine, while a container provides an abstract OS.
- A para-virtualized VM environment provides an abstract hardware abstraction layer (HAL) that requires HAL-specific device drivers.
- Typically a VM will host multiple applications whose mix may change over time versus a container that will normally have a single application. However, it’s possible to have a fixed set of applications in a single container.
- Containers provide a way to virtualize an OS so that multiple workloads can run on a single OS instance.
- With VMs, the hardware is being virtualized to run multiple OS instances.
- Containers’ speed, agility, and portability make them yet another tool to help streamline software development.
- To see what's going on in containers (see the usage sketch below)
- List all processes in one container:
```
docker container top CONTAINER_NAME
```
- To see details of a specific container's config:
```
docker container inspect CONTAINER_NAME
```
- To see live performance stats for all containers:
```
docker container stats
```
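For e.g., a quick sketch of putting those three commands together (the container name `webhost` is a hypothetical example, not from these notes):

```
# Start a container to inspect.
docker container run -d --name webhost nginx

docker container top webhost      # processes running inside 'webhost'
docker container inspect webhost  # full config metadata as JSON
docker container stats            # live CPU/memory/network/IO for all running containers
```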
- Docker networks concepts for Private and Public communications
- When we start a Container, in the background we are connecting to a particular Docker network. By default that is the `bridge` network.
- Each Container is connected to a private virtual network `bridge`.
- Each virtual network routes through a `NAT firewall` on the host IP.
- All Containers on a virtual network can talk to each other without `-p`.
- Best practice is to create a new virtual network for each app. For e.g.
    - Network `my_api` for `mongo` and `nodejs` containers.
    - Network `my_web_app` for `mysql` and `php/apache` containers.
- Use different `Docker Network Drivers` to gain new abilities.
- `-p` is always in `HOST:CONTAINER` format. For e.g.
```
# In below command, '-p 80:80' means forward traffic of port 80 of 'host' to port 80 of container.
docker container run -p 80:80 --name webhost -d nginx
```
- To see information about published ports for any Container, i.e. which ports of the Container are listening on which ports of the host:
```
# 'webhost' is the name of our already running nginx container.
docker container port webhost
```
- To know the IP address of a running Container using the `inspect` command:
```
# 'webhost' is the name of our already running nginx container.
docker container inspect --format "{{ .NetworkSettings.IPAddress }}" webhost
```
- `--network bridge` is the default Docker virtual network, NAT'ed behind the `host` IP.
- `--network host` gains performance by skipping virtual networks, but sacrifices the security of the container model.
- `--network none` removes `eth0` and only leaves us with the `localhost` interface in the Container.
- `Network Drivers` are built-in or 3rd-party extensions that give us `Virtual Network` features.
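Tying the above together, a minimal sketch (the network name `my_app_net` is a hypothetical example) of two containers talking over a custom network by name, using Docker's built-in DNS instead of IPs:

```
# Create a custom virtual network ('bridge' driver by default).
docker network create my_app_net

# Run a container attached to it.
docker container run -d --name web --network my_app_net nginx

# DNS resolution by container name works out of the box on custom networks.
docker container run --rm --network my_app_net alpine ping -c 2 web
```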
- Docker networks CLI management of Virtual Networks
- To list/show all networks:
```
docker network ls
```
- To `inspect` a network:
```
docker network inspect NETWORK_ID
```
- To `create` a network:
```
docker network create --driver DRIVER_NAME NETWORK_NAME
```
- To `attach` a network to a Container:
```
docker network connect NETWORK_NAME CONTAINER_NAME
```
- To `detach/disconnect` a network from a Container:
```
docker network disconnect NETWORK_NAME CONTAINER_NAME
```
- Docker networks: Default Security
- While creating apps, we should make `frontend` and `backend` sit on the same Docker network.
- Make sure that their (frontend, backend) inter-communication never leaves the host.
- All externally exposed ports are closed by default in Containers.
- We must manually expose ports using the `-p` option, which is better default security!
- This gets even better with `Swarm` and `Overlay Networks`.
- What are Images
- `Images` are nothing but application binaries and dependencies for our apps, plus the metadata on how to run it.
- Official definition: "An image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a Container runtime."
- Inside an `Image`, there's no complete OS. No kernel and no kernel modules (e.g. drivers). It contains just the binaries that our application needs, because the `host` provides the `kernel`.
- An Image can be as small as one file (our app binary), like a `golang` static binary.
- Or an Image can be as big as a `Ubuntu distro` with `apt`, `Apache`, `PHP` and more installed.
- Images aren't necessarily named; Images are `tagged`. And a version of an Image can have more than 1 `tag`.
- To pull a specific version of an Image:
```
docker pull nginx:1.17.9
# Or to pull the latest version of any image
docker pull nginx:latest
```
- In production, always lock the version by specifying the exact version number.
- Image Layers
- This is a fundamental concept of how Docker works.
- Images are made up of file system changes and metadata.
- Each layer is uniquely identified (SHA) and only stored once on a `host`. This saves storage space on the `host` and transfer time on `push/pull`.
- A Container is just a single `read/write layer` on top of an Image.
- Docker uses a `Union File System` to present its series of file system changes as an actual file system.
- A Container runs as an additional layer on top of an Image.
- Images are designed using the `Union File System` concept to make layers out of the changes.
- Use the `docker history` command to see the layers of changes made in an image:
```
docker history IMAGE_NAME
```
- Each layer has a unique SHA associated with it.
- Copy on Write: when a change is made to some file in the base image, Docker will copy that file from the base image and put it in the Container layer itself.
- To see the JSON metadata of the Image:
```
docker image inspect IMAGE_NAME
```
- Docker Image tagging and pushing to Docker Hub
- Images don't technically have names. They have `tags`. When we do `docker image ls`, there's no name column; instead there is a `tag` column.
- The `latest` tag doesn't always mean the latest version of that Image. It's just the default tag, but Image owners should assign it to the newest stable version.
- We refer to an Image with 3 distinct parts: `<user>/<repo>:<tag>`. `<repo>` is made of either an organisation name or a username.
- Official repositories live at the `Root Namespace` of the registry, so they don't need an account name in front of the repo name.
- A `tag` is just a pointer to a specific image commit, and really could be anything in that repository.
- To `re-tag` an existing image:
```
# Assuming the 'mysql' image already exists in our system.
docker image tag mysql adityahajare/mysql
docker image tag mysql adityahajare/latestmysql
docker image tag mysql adityahajare/additionaltagname
```
- To push our own Image:
```
# Uploads changed layers to an image registry.
docker image push TAG_NAME
# For e.g.
docker image push adityahajare/mysql
```
- If we get an `Access Denied` error, we need to login with our Docker Hub account. To login:
```
docker login
```
- `docker login` defaults to logging into `Docker Hub`, but we can modify that by adding a `server url`. Do the following to see the default:
```
cat .docker/config.json
```
- NOTE: `Docker For MAC` now stores this auth in the `Keychain` for better security.
- Always logout from shared machines or servers when done, to protect our account.
- To make a `private` repository, login to Docker Hub and create the private repo first, and then push the Image to it.
- Dockerfile
- A `Dockerfile` is a recipe to create an Image.
- A `Dockerfile` is not a shell script or a batch file; it's a totally different kind of file that's unique to Docker, and the default name is `Dockerfile` with a capital `D`.
- From the command line, whenever we need to deal with a `Dockerfile` using the `docker` command, we can use the `-f` option (which is common amongst a lot of tools with Docker) to specify a different file than the default `Dockerfile`. For e.g.
```
docker build -f SOME_DOCKER_FILE
```
- Inside Dockerfile
- `FROM` command:
    - It's in every `Dockerfile` and required to be there.
    - It denotes a minimal distribution, e.g. `debian`, `alpine` etc.
    - One of the main benefits of using these distributions in Containers is to use their `package distribution systems` to install whatever software we need in our packages.
    - Package Manager: `package managers` like `apt` and `yum` are one of the reasons to build Containers from `debian`, `ubuntu`, `fedora` or `centos`.
- `ENV`:
    - Optional block.
    - It's a way to set environment variables.
    - One reason they were chosen as the preferred way to inject `key/value` pairs is that they work everywhere, on every OS and config.
- `RUN`:
    - Optional block.
    - Used to execute shell commands inside the Container. It is used when we need to install software with a package repository, or we need to do some unzipping or some file edits inside the Container itself.
    - `RUN` commands can also run `shell scripts`, or any commands that we can access from inside the Container.
    - A `Dockerfile` can have multiple `RUN` command blocks.
    - All commands are run as `root`. This is a common problem in Docker. If we are downloading any files using a `RUN` command and those files require different permissions, then we will have to run another command to change the permissions. For e.g.:
```
# -R means recursively.
# Syntax: chown -R USER:GROUP DIRECTORY
chown -R www-data:www-data bootstrap
```
- `EXPOSE`:
    - Optional block.
    - By default no `TCP` or `UDP` ports are open inside a Container.
    - It doesn't expose anything from the Container to a `virtual network` unless we list it under the `EXPOSE` block.
    - The `EXPOSE` command does not mean those `ports` will be opened automatically on our `host`.
    - We still have to use `-p` with `docker run` to open up these ports.
    - By specifying `ports` under the `EXPOSE` block, we are only allowing Containers to receive packets coming at these ports.
- `WORKDIR`:
    - Optional block.
    - Used to change the `working directory`.
    - Using `WORKDIR` is preferred over using `RUN cd /some/path`.
- `COPY`:
    - Optional block.
    - Used to copy files/source code from our local machine, or `build servers`, into our Container Image.
- `CMD`:
    - It is a required parameter in every `Dockerfile` (unless the base image already provides one).
    - It is the final command that will be run every time we launch a new Container from the Image, or every time we restart a stopped Container.
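Putting those stanzas together, a minimal sketch of a complete `Dockerfile` (the base image, package, port, paths and `start.sh` script are hypothetical examples, not from these notes):

```
# Start from a minimal distribution that has a package manager.
FROM debian:stretch-slim

# Environment variables work everywhere, on every OS and config.
ENV APP_HOME=/app

# Shell commands run as root inside the image being built.
RUN apt-get update && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*

# Document the port the app listens on (still needs -p at run time).
EXPOSE 8080

# Preferred over 'RUN cd /some/path'.
WORKDIR $APP_HOME

# Copy source from our machine or build server into the image.
COPY . .

# The final command run every time a Container launches from this Image.
CMD ["./start.sh"]
```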
- To build Image from Dockerfile
- To build an Image from a `Dockerfile`:
```
# '-t' to specify tag name.
# '.' says Dockerfile is in current directory location.
docker image build -t SOME_TAG_NAME .
```
- Cleaning Up Docker
- We can use `prune` commands to clean up `images`, `volumes`, `build cache`, and `containers`.
- Useful YouTube video about `prune`: https://youtu.be/_4QzP7uwtvI
+ To remove all containers and images:
```
# Unix
docker rm -vf $(docker ps -a -q)    # delete all containers including their volumes
docker rmi -f $(docker images -a -q) # delete all the images

# Windows (PowerShell)
docker images -a -q | % { docker image rm $_ -f }
```
+ To cleanup all dangling images:
```
# We can use '-a' option to clean up all images.
docker image prune
```
+ To cleanup everything:
```
docker system prune --all
```
+ To see space usage:
```
docker system df
```
- If we're using `Docker Toolbox`, the `Linux VM` won't auto-shrink. We'll need to delete it and re-create it (make sure anything in docker containers or volumes is backed up). We can recreate the `toolbox default VM` with the following commands:
```
docker-machine rm
docker-machine create
```
- Container Lifetime And Persistent Data
- Containers are usually meant to be `immutable` and `ephemeral`, i.e. Containers are `unchanging`, `temporary`, `disposable` etc.
- Best Practice: never update an application in a Container; rather, replace the Container with a new version of the application.
- The idea of Containers having `Immutable Infrastructure` (only re-deploy Containers, never change them) simply means that we don't change things once they're running. If a `config` change needs to happen, or maybe a `Container version` upgrade needs to happen, then we `redeploy` a whole new Container.
- Docker provides 2 solutions for `Persistent Data`:
    - `Data Volumes`.
    - `Bind Mounts`.
+ Data Volumes
- `Docker Volumes` are a special option for Containers which creates a special location outside of that Container's `UFS (Union File System)` to store `unique data`.
- This preserves the data across Container removals and allows us to attach it to whatever Container we want.
- The Container just sees it like a local file path or a directory path.
- Volumes need manual deletion. We can't just clear them out by removing a Container.
- We might want to use the following command to `cleanup` unused volumes and make it easier to see what we're doing there:
```
docker volume prune
```
- A friendly way to assign new volumes to a Container is using `named volumes`.
- Named Volumes
- They provide us the ability to specify things on the `docker run` command. `-v` allows us to specify either a `new volume` we want to create, or a named volume, by specifying the volume name attached by a colon. For e.g.
```
# Check '-v mysql-db:/var/lib/mysql'
# 'mysql-db' is a volume name.
docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=true -v mysql-db:/var/lib/mysql mysql
```
- `Named Volumes` allow us to easily identify and attach the same volumes to multiple Containers.
- When would we ever want to use 'docker volume create' command?
- There are only a few cases when we have to create `volumes` before we run Containers.
- When we want to use `custom drivers` and `labels` for `volumes`, we will have to create the `volumes` before we run our Containers.
- Data Volumes: Important Docker Commands
```
docker pull mysql
docker image inspect mysql
docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True mysql
docker container ls
docker container inspect mysql
docker volume ls
docker volume inspect TAB COMPLETION
docker container run -d --name mysql2 -e MYSQL_ALLOW_EMPTY_PASSWORD=True mysql
docker volume ls
docker container stop mysql
docker container stop mysql2
docker container ls
docker container ls -a
docker volume ls
docker container rm mysql mysql2
docker volume ls
docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql
docker volume ls
docker volume inspect mysql-db
docker container rm -f mysql
docker container run -d --name mysql3 -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql
docker volume ls
docker container inspect mysql3
docker volume create --help
```
- The `-v` option is not compatible with `docker services`. To use `volumes` with `docker services`, we have to use `--mount` and specify the various required options with it. For e.g., creating a `volume` for a `postgres service`:
```
docker service create --name db --network backend -e POSTGRES_HOST_AUTH_METHOD=trust --mount type=volume,source=db-data,target=/var/lib/postgresql/data postgres:9.4
```
+ Bind Mounts
- `Bind Mounts` are simply us sharing or mounting a `host directory`, or `file`, into a Container.
- In other words, `Bind Mounts` map host files or directories to a Container file or directory.
- The Container just sees it like a local file path or a directory path.
- In the background, it's just 2 locations pointing to the same file(s).
- Skips the `UFS (Union File System)`, and `host` files overwrite any existing files in the Container.
- Since `Bind Mounts` are `host` specific, they need specific data to be on the hard drive of the `host` in order to work:
    - We can only specify `Bind Mounts` at `docker container run` time.
    - We cannot specify `Bind Mounts` in a `Dockerfile`.
- It's similar to creating `Named Volumes` with the `-v` option. The only difference is: instead of a `named volume name`, we specify a `full path` before the colon. For e.g.
```
# Windows:
# Check '-v //c/Users/Aditya/stuff:/path/container/'
# '//c/Users/Aditya/stuff' is a full path
docker container run -v //c/Users/Aditya/stuff:/path/container/ IMAGE_NAME

# Mac/Linux:
# Check '-v /Users/Aditya/stuff:/path/container/'
# '/Users/Aditya/stuff' is a full path
docker container run -v /Users/Aditya/stuff:/path/container/ IMAGE_NAME
```
- NOTE: Docker tells the difference between `Named Volumes` and `Bind Mounts` by the leading forward slash (in Windows, there are 2 forward slashes) when we set the `-v` option value.
- `Bind Mounts` are great for local development and local testing.
- Bind Mounts: Important Docker Commands
```
pcat Dockerfile
docker container run -d --name nginx -p 80:80 -v $(pwd):/usr/share/nginx/html nginx
docker container run -d --name nginx2 -p 8080:80 nginx
docker container exec -it nginx bash
```
- Docker Compose - The Multi-Container Tool
- Why's:
    - Helps configure relationships between Containers.
    - Allows us to save our Docker Container `run` settings in an easy-to-read file.
    - With `Docker Compose`, we can create one-liner developer environment startups.
- There are 2 parts to Docker Compose:
    - A `YAML` formatted file that describes our solution options for:
        - Containers
        - Networks
        - Volumes
        - Environment Variables
        - Images
    - A CLI tool, `docker-compose`:
        - Used for local dev/test automation with those `YAML` files to simplify our Docker commands.
+ docker-compose.yml
- It was originally called `Fig` (years ago).
- The Compose YAML format has its own versions, e.g. `1, 2, 2.1, 3, 3.1` etc.
- It can be used with the `docker-compose` command for local Docker automation, or it can now be used (v1.13 and above) directly with the Docker command line in `production` with `swarm`.
- `docker-compose.yml` is the default filename, but any other filename can be used with the `docker-compose -f` option, as long as it's proper `YAML`.
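For e.g., a minimal sketch of a `docker-compose.yml` (the service names, image tags and ports here are hypothetical examples):

```
version: '3.1'

services:
  proxy:
    image: nginx:1.17   # pin versions rather than relying on 'latest'
    ports:
      - '80:80'         # HOST:CONTAINER, same meaning as 'docker run -p'
  web:
    image: httpd        # reachable from 'proxy' by its service name 'web'
```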
+ docker-compose CLI
- The `docker-compose` CLI tool comes with Docker for `windows` and `mac`, as well as Toolbox, but there's a separate download of the `docker-compose` CLI for `linux`.
- The `docker-compose` CLI is not a `production-grade` tool, but it is ideal for local development and test.
- Two common commands that we use are:
```
docker-compose up    # Setup Volumes, Networks and start all Containers.
docker-compose down  # Stop all Containers and remove Containers, Volumes and Networks.

pcat docker-compose.yml
docker-compose up
docker-compose up -d
docker-compose logs
docker-compose --help
docker-compose ps
docker-compose top
docker-compose down
```
- If all our projects had a `Dockerfile` and `docker-compose.yml`, then `new developer onboarding` would be just running the following 2 commands:
```
git clone github.com/some/project
docker-compose up
```
+ docker-compose to build Images at runtime
- Another thing `docker-compose` can do is build our Images at runtime, including our custom Images.
- It will look in the `cache` for Images, and if the Compose file has build options in it, it will build the Image when we use the `docker-compose up` command.
- It won't build the Image every single time. It will only build it if it doesn't find it. We will have to use `docker-compose build` to rebuild Images if we change them, or we can use `docker-compose up --build`.
- This is great for complex builds that have lots of `vars` or `build args`. `Build Arguments` are `Environment Variables` that are available only during Image builds.
- Important commands:
```
docker-compose up
docker-compose up --build
docker-compose down
docker image ls
docker-compose down --help
docker image rm nginx-custom
docker image ls
docker-compose up -d
docker image ls
docker-compose down --rmi local
```
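A minimal sketch of the `build:` section in a Compose file (the image name, build context and Dockerfile name are hypothetical examples):

```
version: '3.1'

services:
  web:
    # Tag to give the image once it is built.
    image: nginx-custom
    build:
      context: .                   # directory containing the build files
      dockerfile: nginx.Dockerfile # alternate Dockerfile name, like 'docker build -f'
    ports:
      - '80:80'
```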
- Docker Swarm - Built-In Orchestration
- `Swarm Mode` is a `clustering` solution built inside Docker.
- Swarm Mode is not enabled by default in Docker.
- It's a feature launched in 2016 (added in `v1.12` via the `SwarmKit Toolkit`) that brings together years of understanding the needs of Containers and how to actually run them live in production.
- At its core, `Swarm` is actually a `server clustering` solution that brings together different operating systems or hosts or nodes into a single manageable unit, inside which we can orchestrate the lifecycle of our Containers.
- This is not related to `Swarm Classic` for `pre-1.12` versions.
- `Swarm Mode` answers the following questions:
- How do we automate the Container lifecycle?
- How can we easily scale out/in/up/down?
- How can we ensure our Containers are re-created when they fail?
- How can we replace Containers without downtime (`blue/green` deployment)?
- How can we control where Containers get started?
- How can we track where Containers get started?
- How can we create `cross-node` virtual networks?
- How can we ensure only trusted servers run our Containers?
- How can we store `secrets`, `keys`, `passwords` and get them to the right Container (and only that Container)?
- Once we enable `Swarm Mode`, the following are the new sets of commands we can use:
```
docker swarm
docker node
docker service
docker stack
docker secret
```
- When we're in a `Swarm`, we cannot use an `image` that's only on 1 node. A `Swarm` has to be able to pull `Images` on all nodes from some repository in a `registry` that they can all reach.
+ How to check if swarm mode is activated and how to activate it
- To check if `Swarm` mode is activated or not:
    - execute:
```
docker info
```
    - Look for `Swarm: inactive/active`. For e.g. consider the following output of the `docker info` command:
```
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.8
 Storage Driver: overlay2
  Backing Filesystem: <unknown>
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive # Check for this one.
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.19.76-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 2.924GiB
 Name: docker-desktop
 ID: J2KP:ZPIE:5DLS:SLVA:RC2C:OJVX:7GK6:3T77:WY4G:XCXP:U4RB:2JV2
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 36
  Goroutines: 53
  System Time: 2020-03-21T05:55:58.0263795Z
  EventsListeners: 3
 HTTP Proxy: gateway.docker.internal:3128
 HTTPS Proxy: gateway.docker.internal:3129
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine
```
- To enable `Swarm` mode:
```
docker swarm init
```
+ What happens behind the scenes when we run docker swarm init?
- It does a lot of `PKI` and security automation:
    - A `Root Signing Certificate` is created for our `Swarm`, which it will use to establish `trust` and `sign` certificates for all `nodes` and all `managers`.
    - A special `Certificate` is issued for the first `Manager Node`, because it's a `manager` vs. a `worker`.
    - `Join Tokens` are created which we can use on other `nodes` to join this `Swarm`.
- A `Raft Consensus Database` is created to store the `root CA`, `configs` and `secrets`:
    - Encrypted by default on disk (1.13+).
    - No need for another `key/value` system to hold `orchestration/secrets`.
    - Replicates logs amongst `Managers` via mutual TLS in the `control plane`.
- `Raft` is a protocol that ensures consistency across multiple nodes, and it's ideal for use in the Cloud, where we can't guarantee that any one thing will be available at any moment in time.
- It creates the `Raft` database on disk. Docker stores the configuration of the `Swarm` and that `first Manager` there, and it actually encrypts it.
- Then it will wait for any other nodes before it starts actually replicating the database over to them.
- All of the traffic between nodes, once we create other nodes, is going to be encrypted.
- We don't need an additional key/value storage system or some database architecture to be the backend configuration management of our `Swarm`.
+ Key Concepts
- A `Service` in a `Swarm` replaces `docker run` (see the sketch below).
- There can only be one `Leader` at a time amongst all `managers`.
- To remove all Containers, we have to remove the `Swarm Service`.
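A quick sketch of that `service` workflow (the service name `web` is a hypothetical example):

```
# In Swarm mode, a service replaces 'docker run'.
docker service create --name web --replicas 3 -p 80:80 nginx

docker service ls       # list services
docker service ps web   # list the tasks/containers behind the service

# Removing the service is what removes all of its containers.
docker service rm web
```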
+ Creating a 3-node Swarm Cluster
- The following example demonstrates using multiple `hosts/nodes/instances` or `multiple OS's`. We're going to set up a `3-node Swarm` across all 3 of those nodes.
- Options for trying out and implementing this setup:
    - http://play-with-docker.com
        - Only needs a browser, but resets after `4 hours`.
    - `docker-machine + VirtualBox`
        - Free and runs locally, but requires a machine with `8gb` memory.
        - Comes by default with `Docker for Win and Mac`.
        - For `Linux`, we will have to download and set it up explicitly first.
    - `Digital Ocean + Docker Install`
        - Most like a `production` setup, but costs `$5 to $10` per node per month.
        - They run everything on `SSD`, so it's nice and fast.
    - Roll our own
        - `docker-machine` can provision machines for `Amazon Instances`, `Azure Instances`, `Digital Ocean Droplets`, `Google Compute Nodes` etc.
        - Install docker anywhere with `get.docker.com`.
        - `docker-machine` is a tool to simply automate dev and test environments. It was never really designed to set up all of the production settings we might need for a `multi-node Swarm`.
- To experiment setting up a `3-node Swarm Cluster` on http://play-with-docker.com:
    - Go to http://play-with-docker.com.
    - Launch 3 instances.
    - On any 1 instance, execute:
```
# First execute below command, it will give an error and display public ips available on eth0 and eth1.
docker swarm init
# Copy eth0 ip and specify it as the --advertise-addr
docker swarm init --advertise-addr 192.168.0.6
```
    - Copy the `docker swarm join` command from there.
    - Go to the other 2 nodes and paste the `docker swarm join` command.
    - Go to the 1st node and execute the following command to list out nodes:
```
docker node ls
```
    - To promote `node2` to `manager`, execute the following command on `node1` (Leader):
```
docker node update --role manager node2
```
    - To make `node3` join as a `manager` by default, go to `node1` and execute the following command to get a `join token`:
```
docker swarm join-token manager
```
    - Copy the join command and execute it on `node3`.
    - On `node1`, execute `docker node ls` to see the status of the swarm nodes.
    - Now to run a Docker `service` with `3 replicas` of `alpine` that pings one of the Google open DNS servers (8.8.8.8), execute the following command on `node1`:
```
docker service create --replicas 3 alpine ping 8.8.8.8
```
    - Execute:
```
docker service ps SERVICE_NAME
# For e.g.
docker service ps busy_hertz
```
- With `Swarm` mode enabled, we get access to a new networking driver called `overlay`.
- To create a `network` using the `overlay` driver:
```
# When we don't specify anything, the default driver used is 'bridge'.
docker network create --driver overlay NETWORK_NAME
```
+ Overlay Network Driver
- It's like creating a `Swarm`-wide `bridge` network, where the Containers across `hosts` on the same `virtual network` can access each other, kind of like they're on a `VLAN`.
- This driver is only for `intra-Swarm communication`, i.e. for `container-to-container` traffic inside a single `Swarm`.
- It acts as if everything is on the same `subnet`.
- The `overlay` network is the only kind of network we could use in a `Swarm`, because `overlay` allows us to span across nodes as if they are all on the `local network`.
- The `overlay` driver doesn't play a huge part in incoming traffic, as it's trying to take a holistic `Swarm` view of the network so that we're not constantly messing around with networking settings on individual nodes.
- We can also optionally enable full network encryption using `IPSec (AES)` encryption on network creation:
    - It will set up `IPSec tunnels` between all the different nodes of our `Swarm`.
    - `IPSec (AES) Encryption` is off by default for performance reasons.
- Each `service` can be connected to multiple `networks`, e.g. (front-end, back-end).
- When we create our `services`, we can add them to none of the `overlay` networks, or to one or more `overlay` networks.
- A lot of traditional apps would have their back-end on the back-end network and their front-end on the front-end network. Then maybe they would have an API between the two that would be on both networks. We can totally do this in `Swarm`.
+ Example: Drupal with Postgres as Services
- Create an `overlay` network first:
```
docker network create --driver overlay NETWORK_NAME
# For e.g.
docker network create --driver overlay mydrupal
```
- Create a `Postgres` service on the `mydrupal` network:
```
docker service create --name psql --network mydrupal -e POSTGRES_PASSWORD=adi123 postgres
```
- After running the above command, we don't see the image downloading and all that. That is because `Services` can't be run in the foreground. `Services` have to go through the `orchestrator` and `scheduler`. Execute the following command to list out `services`:
```
docker service ls
```
- To see details:
```
# To see the specific `service` details, such as on which `node` it is running.
docker service ps psql
# To see the logs from the Container:
docker container logs psql.1.gfdsnjkfdjk3kbr3289d # Tab completion is available
```
- Create a `Drupal` service on the same `mydrupal` network:
```
docker service create --name drupal --network mydrupal -p 80:80 drupal
```
- To see details:
```
# To see the specific `service` details, such as on which `node` it is running.
docker service ps drupal
```
- Now we have the database running on `node1` and the website running on `node2`. They can talk to each other using the `Service Name`.
- Swarm - Routing Mesh
- `Routing Mesh` is a `Stateless Load Balancer`.
- The `Routing Mesh` load balancer acts at the transport layer (TCP), not at the application layer (DNS).
- `Routing Mesh` routes `ingress (incoming)` packets for a `Service` to a proper `Task`.
- The `Routing Mesh` is an `incoming` or `ingress` network that distributes packets for our `service` to the `Tasks` for that `service`, because we can have more than one `Task`.
- It spans all nodes in the `Swarm`.
- It uses `Kernel Primitives` called `IPVS` from the `Linux Kernel`.
- `Routing Mesh` load balances `Swarm Services` across their `Tasks`.
- Two ways `Routing Mesh` works:
    - `Container-to-Container` in an `Overlay Network` (uses a `Virtual IP (VIP)`). The `Virtual IP (VIP)` is something `Swarm` puts in front of all `Services`. It's a private IP inside the `virtual networking` of `Swarm`, and it ensures that the load is distributed amongst all the `Tasks` for a `Service`.
    - External traffic incoming to published ports (all nodes listen).
- The benefit of a `Virtual IP (VIP)` over `DNS Round Robin` is that, a lot of times, the `DNS cache` inside our apps prevents us from properly distributing the load.
- To run multiple websites on the same port, we could use:
    - An `Nginx` or `HAProxy` load balancer proxy in front of the Swarm.
    - `Docker Enterprise Edition` comes with a built-in web proxy. It is called `UCP or Docker Data Center`.
- `Web Sockets` don't do well with `Routing Mesh`, because a `socket` needs a persistent connection to a specific Container, and because of load balancing, `Routing Mesh` keeps switching between Containers. We could have a `proxy` in front of it to make it work with `Web Sockets`.
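A quick sketch of the routing mesh in action (the service name `web` and image are hypothetical; any node's IP works, not just the node running a task):

```
# Create a 3-replica service publishing port 8080 on every node.
docker service create --name web --replicas 3 -p 8080:80 nginx

# Hitting ANY node on port 8080 works: the ingress network routes the
# packet to one of the service's tasks, wherever it is running.
curl localhost:8080
```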
+ Docker service logs to see logs from different nodes
- To see logs from the different tasks of a `docker service`, execute:
```
docker service logs SERVICE_NAME
# For e.g.
docker service logs adipostgres
```
- If `logging` is not available, turn it on by enabling the `experimental features` of docker:
```
# Open /etc/docker/daemon.json and specify following:
{"experimental": true}
```
- Swarm - Stacks
- `Stacks` are another layer of abstraction added to `Swarm`.
- `Swarm` is basically a `Docker Engine`, and it can accept a `Compose File` using the `stack` command.
- `Swarm` reads the `Compose File` without needing `Docker Compose` anywhere on the server.
- Basically it's a `Compose File` for `Swarm` in production.
- `Stacks` accept a `Compose File` as their declarative definition for `services`, `networks` and `volumes`.
- We use the following command to deploy our `Stack`:
```
docker stack deploy
```
- `Stacks` manage all those objects for us, including an `overlay` network per `Stack`. It also adds the `Stack Name` to the start of their names.
- The `deploy:` key is what we use in our `Compose File`. It allows us to specify things that are specific to `Swarm`. For e.g.
    - How many `replicas` do we want?
    - What do we want to do on `failover`?
    - How do we want to do `rolling updates`?
    - And all those sorts of things that we wouldn't care about on our local development machine.
- A `Stack's Config File` doesn't allow `building`. And `building` should never happen in `production` anyway.
- `docker-compose` now ignores the `deploy:` key in the Config File. `Swarm` ignores the `build:` key in the Config File.
- The `docker-compose` CLI is not needed on a `Swarm Server`. It's not a `production` tool. It was designed to be a developer and sysadmin helper tool. It's best for local work.
+ How to deploy Swarm stack using compose file?
- To deploy a `Swarm` stack using a Compose File:
```
# '-c' option is for Compose File.
docker stack deploy -c COMPOSE_FILE APP_NAME
# For e.g.
docker stack deploy -c adi-swarm-stack.yml myapp
```
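A minimal sketch of a stack Compose file with the Swarm-only `deploy:` key (the file contents are a hypothetical example):

```
version: '3.1'

services:
  web:
    image: nginx:1.17
    ports:
      - '80:80'
    deploy:
      replicas: 3        # Swarm-only settings live under deploy:
      update_config:
        parallelism: 1   # roll tasks one at a time
        delay: 10s
    # a 'build:' key here would be ignored; stacks don't build images
```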
- Swarm - Secret Storage
- It is the easiest `secure` solution for storing `secrets` in `Swarm`.
- Encrypted on disk, and encrypted in transit as well.
- There are lots of other options, like `Vault`, available for storing `secrets`.
- Supports generic strings or binary content up to `500kb` in size.
- Doesn't require apps to be rewritten.
- From `Docker v1.13.0`, the `Swarm Raft Database` is encrypted on disk.
- It's only stored on the disk of the `manager` nodes, and they're the only ones that have the keys to unlock/decrypt it.
- The default for the `Managers` and `Workers` control plane is `TLS + Mutual Auth`.
- Secrets are first stored in the `Swarm` (using the `docker secret` command), then assigned to a `Service(s)`.
- Only Containers in the assigned `Service(s)` can see them.
- They look like files in Containers, but are actually an `in-memory` filesystem. Inside the Container, they are located at:
```
/run/secrets/<secret_name>
# OR
/run/secrets/<secret_alias>
```
- Local `docker-compose` can use `file-based` secrets, but they are not secure.
+ What is a Secret?
- Usernames and passwords.
- `TLS` certificates and keys.
- SSH keys.
- Any data we would prefer not to be `on the front page of the news`.
+ How to create a Secret?
- There are 2 ways we can create a `secret` in `Swarm`:
- Option 1: create a text file and store the value in it. Assume we have a file `db_username.txt` with the text content `aditya`:
```
> cat db_username.txt
aditya
```
- Now, to create a `secret` from the above file (running it will spit out a key in return):
```
docker secret create SECRET_NAME FILE_PATH
# For e.g.
docker secret create DB_USER db_username.txt
```
- Option 2: pass the `secret value` at the command line. The trailing `-` reads the value from stdin (again, a key is printed in return):
```
echo "myPasswordAdi123" | docker secret create DB_PASSWORD -
```
+ How to decrypt a Secret?
- Only `Containers` and `Services` have access to the decrypted `secrets`.
- For e.g.
```
# Demo
# Create a service first.
docker service create --name adidb --secret DB_USER --secret DB_PASS -e POSTGRES_PASSWORD_FILE=/run/secrets/DB_PASS -e POSTGRES_USER_FILE=/run/secrets/DB_USER postgres

# List containers of 'adidb' service and copy the container name.
docker service ps adidb

# Get a shell inside the Container.
docker exec -it adidb.1.fbhdbj3738dh2 bash # 'adidb.1.fbhdbj3738dh2' is CONTAINER_NAME

# Once we have the shell inside the Container, list all secrets:
ls /run/secrets/

# 'cat' the contents of any secret file and it will display the decrypted value.
cat DB_USER
```
+ How to remove a Secret?
- Only `Containers` and `Services` have access to the decrypted `secrets`.
- When we remove/add a `secret`, it will stop the Container and redeploy a new one. This is not ideal for database `services` in `Swarm`.
- To remove a `secret` from a service:
```
docker service update --secret-rm SECRET_NAME SERVICE_NAME
```
- Swarm - Service Updates Changing Things In Flight
- Swarm's update functionality is centered around a `rolling update` pattern for our `replicas`.
- Provides a `rolling replacement` of `tasks/containers` in a `service`.
- In other words, if we have a `service` with more than one `replica`, it's going to roll through them by default, one at a time, updating each `Container` by replacing it with the new settings that we're putting in the `update` command.
- Limits `downtime` (be careful: "limits" does not mean "prevents" downtime).
- Will replace `Containers` for most changes.
- There are loads of `CLI options (77+)` available to control the `update`.
- `create` options will usually change by adding `-add` or `-rm` to them.
- Also includes `rollback` and `healthcheck` options.
- Also has `scale` and `rollback` subcommands for quicker access. For e.g.
```
docker service scale web=4
# And
docker service rollback web
```
- If a `stack` already exists and we do a `stack deploy` to the same `stack`, it will issue `service updates`.
- In `Swarm Updates`, we don't have a different `deploy` command. It's the same `docker stack deploy` with the file that we have edited, and its job is to work with all of the other parts of the `API` to determine if there are any changes needed, and then roll those out with a `service update`.
+ Swarm Update Examples
- Just update the `image` to a newer version that is already being used. We will have to use the `service update` command:
```
docker service update --image myapp:1.2.1 <SERVICE_NAME>
```
- Add an `environment` variable and remove a `port`. We will have to use the `service update` command:
```
docker service update --env-add NODE_ENV=production --publish-rm 8080 <SERVICE_NAME>
```
- Change the number of `replicas` of two `services`. We will have to use the `service scale` command:
```
# Set number of `web` replicas to 8 and number of `api` replicas to 6.
docker service scale web=8 api=6
```
- For a `Swarm Stack Update`, first edit the `YAML` file and then execute:
```
docker stack deploy -c FILE_NAME.yml <STACK_NAME>
```
- Docker Healthchecks
- Healthchecks are supported in `Dockerfile`, `Compose YAML`, `docker run` and `Swarm Services`.
- The Docker engine `exec`s the command in the Container (for e.g. `curl localhost`).
- Docker runs the `healthcheck` command from inside the Container, not from outside the Container.
- It expects `exit 0 (OK)` or `exit 1 (Error)`.
- `Healthcheck` commands are run every `30 seconds` by default.
- A `healthcheck` in Docker specifically has only 3 `states`. Following are the `states`:
    - `starting`: the first `30 seconds` by default, where it hasn't run a `healthcheck` command yet.
    - `healthy`.
    - `unhealthy`.
- This is much better than just asking "is the binary still running?".
- This isn't an external monitoring replacement. 3rd-party monitoring tools provide much better insights, including graphs and more.
+ Where do we see Docker Healthcheck status?
- The `healthcheck status` shows up in `docker container ls`.
- We can check the `last 5 healthchecks` with `docker container inspect`.
- The `docker run` command does not take action on an `unhealthy` Container. Once the `healthcheck` considers a Container `unhealthy`, `docker run` is just going to indicate that in the `ls` and `inspect` commands.
- `Swarm Services` will replace `tasks/containers` if they fail a `healthcheck`.
- `service update` commands wait for the `healthcheck` before continuing.
+ Healthcheck Docker Run Example
- Adding a `healthcheck` at runtime using the `docker run` command:
```
docker run \
    --health-cmd="curl -f localhost:9200/_cluster/health || false" \
    --health-interval=5s \
    --health-retries=3 \
    --health-timeout=2s \
    --health-start-period=15s \
    elasticsearch:2
```
+ Healthcheck in Dockerfile
- Basic `HEALTHCHECK` command in a `Dockerfile`:
```
HEALTHCHECK curl -f http://localhost/ || false
```
- Custom options with the `HEALTHCHECK` command in a `Dockerfile`:
```
HEALTHCHECK --timeout=2s --interval=3s --retries=3 \
    CMD curl -f http://localhost/ || exit 1 # `exit 1` is equivalent to `false`.
```
- All options for the `HEALTHCHECK` command in a `Dockerfile`:
```
# How often it should run the `healthcheck`.
--interval=DURATION # Default 30s

# How long it should wait before marking the Container `unhealthy`.
--timeout=DURATION # Default 30s

# When it should fire the first `healthcheck` command. This gives us the
# ability to specify a longer wait period than the first 30 seconds.
--start-period=DURATION # Default 0s.

# Max number of times it should run the `healthcheck` before
# it marks that container `unhealthy`.
--retries=N # Default 3.
```
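Since healthchecks are also supported in Compose YAML (per the list above), a minimal sketch of the equivalent section in a Compose file (the service and check command are hypothetical examples):

```
version: '3.4'  # start_period requires Compose file version 3.4+

services:
  web:
    image: nginx:1.17
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost/']
      interval: 30s
      timeout: 2s
      retries: 3
      start_period: 15s
```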
- Container Registries
- An image registry needs to be part of our `Container Plan`.
- `Docker Store` is different than `Docker Hub`.
- `Docker Cloud` is different than `Docker Hub`.
+ Docker Hub
- It is the most popular public `Image` registry.
- `Docker Hub` is a `Docker Registry` plus `Lightweight Image Building`.
- It provides 1 free `private repository`. Beyond that, we have to pay.
- We can make use of `webhooks` to make our `repository` send a `webhook notification` to services like `Jenkins`, `Codeship`, `Travis CI` or something like that, to have automated builds continue down the line.
- `Webhooks` are there to help us automate the process of getting our code all the way from something like `Git` or `GitHub` to `Docker Hub`, and all the way to our servers where we want to run it.
- `Collaborators` are how we give other users permissions on our `Image`.
+ Running Docker Registry
- Using `Docker Registry`, we can run a `private` Image registry for our `network`.
- It's part of the `docker/distribution` GitHub repo.
- It is the de facto standard in private container registries.
- Not as full-featured as `Docker Hub` or others: no web UI, basic auth only.
- At its core, it's just a web API and storage system, written in `Go`.
- Storage supports `local`, `S3`, `Azure`, `Alibaba`, `Google Cloud` and `OpenStack Swift`.
- We should secure our registry with `TLS (Transport Layer Security)`.
- Storage cleanup happens via `Garbage Collection`.
- Enable `Docker Hub Caching` via the `--registry-mirror` option.
+ Running A Private Docker Registry
- Run the registry image on the default port `5000`.
- Re-tag an existing Image and push it to our new `registry`.
- Remove that Image from our local cache and pull it from the new `registry`.
- Re-create the `registry` using a `bind mount` and see how it stores data.
- The following commands demonstrate how to run a private Docker registry:
```
docker container run -d -p 5000:5000 --name registry registry
docker container ls
docker image ls
docker pull hello-world
docker run hello-world
docker tag hello-world 127.0.0.1:5000/hello-world
docker image ls
docker push 127.0.0.1:5000/hello-world
docker image remove hello-world
docker image remove 127.0.0.1:5000/hello-world
docker container rm admiring_stallman
docker image ls
docker pull 127.0.0.1:5000/hello-world:latest
docker container kill registry
docker container rm registry
docker container run -d -p 5000:5000 --name registry -v $(pwd)/registry-data:/var/lib/registry registry
docker image ls
docker push 127.0.0.1:5000/hello-world
```
+ Registry And Proper TLS
- Secure by default: Docker won't talk to a registry without `HTTPS`.
- Except `localhost (127.0.0.0/8)`.
- For a remote `self-signed TLS` certificate, enable the `insecure-registry` option in the engine.
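A minimal sketch of enabling that option in the engine's `/etc/docker/daemon.json` (the registry host and port here are hypothetical examples; the daemon needs a restart afterwards):

```
{
  "insecure-registries": ["myregistry.example.com:5000"]
}
```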
+ Private Docker Registry In Swarm
- Works the same way as on localhost.
- The only difference is we don't run the `docker run` command. We have to run a `docker service` command or use a `stack file`.
- Because of the `Routing Mesh`, all nodes can see `127.0.0.1:5000`.
- We don't have to enable `insecure registry` for that address, because it's already allowed by the `Docker Engine`.
- Remember to decide how to store Images (volume driver).
- When we're in a `Swarm`, we cannot use an `image` that's only on one node. A `Swarm` has to be able to pull `Images` on all nodes from some repository in a `registry` that they can all reach.
- Note: all nodes must be able to access the `images`.
- Pro Tip: use a hosted `SaaS Registry` if possible.
- The following commands demonstrate how to run a private Docker registry in Swarm:
    - Go to https://labs.play-with-docker.com.
    - Start a session, click on the wrench/spanner icon, and launch the `5 Managers And No Workers` template.
    - Commands:
```
# http://play-with-docker.com
docker node ls
docker service create --name registry --publish 5000:5000 registry
docker service ps registry
docker pull hello-world
docker tag hello-world 127.0.0.1:5000/hello-world
docker push 127.0.0.1:5000/hello-world
docker pull nginx
docker tag nginx 127.0.0.1:5000/nginx
docker push 127.0.0.1:5000/nginx
docker service create --name nginx -p 80:80 --replicas 5 --detach=false 127.0.0.1:5000/nginx
docker service ps nginx
```
+ What is Kubernetes
- `Kubernetes` is a popular Container orchestrator.
- `Container Orchestration` means: make many servers act like one.
- `Kubernetes` was released in 2015 by Google Inc., and it is now maintained by an open source community, which Google is also part of.
- `Kubernetes` runs on top of Docker (usually) as a set of APIs in Containers.
- `Kubernetes` provides a set of `APIs` and a `CLI` to manage Containers across servers/nodes.
- Like in Docker we were using the `docker` command a lot, in `Kubernetes` we use the `kubectl (kube control)` command.
- `kubectl` is also referred to as the `Kube Control` tool or `Kube Cuddle` tool or `Koob Control` etc., but the standard name from the official repo is now `Kube Control`.
- Many cloud vendors provide `Kubernetes` as a service to run our Containers.
- Many vendors make a `distribution` of `Kubernetes`. It's similar to the concept of a `linux distribution`, e.g. the same `linux kernel` runs in different `distributions` of `linux`.
- In short, `Kubernetes` is a series of Containers, CLIs and configurations.
+ Why Kubernetes
- Not every solution needs orchestration.
- A simple formula for whether or not to use orchestration:
    - Take the `number of servers` that we need for a particular environment, and then the `change rate` of our applications or the environment itself. The product of those 2 equals the benefit of orchestration.
- If our application is changing only once a month or less, then orchestration, and the effort involved in deploying it, managing it, and securing it, may be unnecessary at this stage. Especially if we're a solo developer or just a very small team. That's where things like `Elastic Beanstalk`, `Heroku` etc. start to shine as alternatives to doing our own orchestration.
- Carefully decide which orchestration platform we need.
- There are Cloud-specific orchestration platforms like `AWS ECS`, and more traditional offerings that have been around a little bit longer, like `Cloud Foundry`, `Mesos` and `Marathon`.
- If we're concerned about running Containers on premise, and in the Cloud or potentially multi-Cloud, then we may not want to go with those Cloud-specific offerings like `ECS`, because those were around before `Kubernetes` was. It is a legacy solution that Amazon still supports, and it's still a neat option, but only if we're specific to `AWS` and that's the only place we ever plan to deploy Containers.
- `Swarm` and `Kubernetes` are the most popular Container orchestrators that run on every Cloud, in data centers, and even in small environments, possibly like `IoT`.
- If we decide on `Kubernetes` as our orchestrator, then the next big decision comes down to: which distribution are we going to use?
    - The first part of this decision is to figure out whether we want a Cloud-managed solution, or whether we want to roll our own solution with a vendor's product that we would install on the servers ourselves.
    - Some of the common vendor-supported distributions are `Docker Enterprise`, `Rancher`, `OpenShift from RedHat`, `Canonical from Ubuntu Company`, `PKS from VMware` etc. Check out this list of Kubernetes Certified Distributors.
- We usually don't need the pure upstream version of `Kubernetes` from GitHub.
+ Kubernetes vs. Swarm
- `Kubernetes` and `Swarm` both solve similar problems. They are both Container orchestrators that run on top of a Container runtime. There are different Container runtimes like `Docker`, `Containerd`, `CRI-O`, `frakti` etc. `Docker` is #1.
- `Kubernetes` and `Swarm` are both solid platforms with vendor backing.
- `Swarm` is easier to `deploy/manage`.
- `Kubernetes` has more features and flexibility. It can solve more problems in more ways, and it also has wide adoption and support.
- Advantages of `Swarm`:
    - Comes with Docker; a single-vendor Container platform, i.e. the Container runtime and the orchestrator are both built by the same company (Docker).
    - Easiest orchestrator to deploy/manage ourselves.
    - Follows the 80/20 rule, i.e. 20% of the features for 80% of the use cases.
    - Runs anywhere Docker can run:
        - local, cloud, datacenter
        - ARM, Windows, 32-bit
    - Secure by default.
    - Easier to troubleshoot, because there are fewer moving parts in it and fewer things to manage.
- Advantages of `Kubernetes`:
    - Clouds will deploy/manage `Kubernetes` for us. It has the widest Cloud and vendor support.
    - Nowadays, even infrastructure vendors like `VMware`, `Red Hat`, `NetApp` etc. are making their own distributions.
    - Widest adoption and community.
    - Flexible: covers the widest set of use cases.
    - `Kubernetes first` vendor support.
    - "No one ever got fired for buying IBM", i.e. picking a solution isn't 100% rational.
    - Trendy, will benefit our career.
    - CIO/CTO checkbox.
+ Kubernetes Installation- Docker Desktop:
- Best one! It provides many things out of the box.
- Enable Kubernetes in Docker's settings and installation is done!
- Sets up everything inside Docker's existing Linux VM.
- Runs/Configures Kubernetes Master Containers.
- Manages kubectl install and certs.
- Docker Desktop Enterprise (paid) allows us to swap between different versions of Kubernetes on the fly.
- Docker Toolbox on Windows: MiniKube
- If we're using Docker Toolbox on Windows then we should use MiniKube.
- We don't even need Docker Toolbox installed. We can run MiniKube separately.
- Download the MiniKube Windows installer minikube-installer.exe from GitHub.
- Type minikube start in a shell after installation.
- MiniKube has an experience similar to docker-machine.
- Creates a VirtualBox VM with a Kubernetes master setup.
- We separately have to install kubectl for Windows.
- Linux or Linux VM in Cloud: MicroK8s
- If we are using a Linux OS or any VM with Linux on it, we should use MicroK8s.
- MicroK8s is made by Canonical (the Ubuntu company).
- MicroK8s installs Kubernetes (without Docker Engine) right on the OS, on localhost (Linux).
- Uses snap (rather than apt or yum) for install, so we have to install snap first on our Linux OS. snap can be installed using apt-get or yum.
- Control the MicroK8s service via microk8s. commands, for e.g. the microk8s.enable command.
- kubectl is accessible via microk8s.kubectl. It's better to add an alias for this command in our bash/zsh profile: alias kubectl=microk8s.kubectl. A minimal install sketch follows.
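- A minimal install sketch on an Ubuntu-like system (assuming snap is already available; the dns addon is just an illustrative example):
```sh
# Install MicroK8s via snap ('--classic' confinement is required).
sudo snap install microk8s --classic
# Enable an addon, e.g. DNS (CoreDNS).
microk8s.enable dns
# Use the bundled kubectl, or alias it as above.
microk8s.kubectl get nodes
```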
- Kubernetes In A Browser:
- Easy to get started.
- Downside is it doesn't keep our environments. They are not saved.
- Try https://labs.play-with-k8s.com
- Or try https://www.katacoda.com/courses/kubernetes/playground
+ Kubernetes Architecture Terminology
- Kubernetes:
- The whole orchestration system.
- Often shortened to K8s or Kube.
- Kubectl:
- Kubectl stands for Kube Control.
- It's a CLI to configure Kubernetes and manage apps.
- Node:
- A Node is a single server in the Kubernetes Cluster.
- Kubelet:
- The Kubelet is the Kubernetes agent running on each node.
- Control Plane:
- Sometimes called the master.
- The Control Plane is the set of Containers that manage the cluster.
- The Control Plane includes the API Server, Scheduler, Controller Manager, etcd, CoreDNS and more (a quick way to list them is sketched below).
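- To see those Control Plane components on a real cluster, we can list the system Pods (the exact Pod names vary by distribution):
```sh
kubectl get pods -n kube-system
```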
- Kube-Proxy:
- It handles the networking; it runs on each node and routes traffic to containers (see the Kube-Proxy FAQ entry below).
+ Kubernetes Container Abstractions
- Pod: One or more Containers running together on one Node.
- A Pod is the basic unit of deployment.
- Containers are always in Pods.
- We don't deploy Containers independently. Instead, Containers live inside Pods and we deploy Pods.
- Controller: For creating/updating Pods and other objects.
- Controllers sit on top of Pods and we use them for creating/updating Pods and other objects.
- A Controller is a differencing engine that comes in many types.
- There are many types of Controllers, such as:
- Deployment Controller
- ReplicaSet Controller
- StatefulSet Controller
- DaemonSet Controller
- Job Controller
- CronJob Controller
- And a lot more..
- Service: The Service is a little bit different in Kubernetes than it is in Swarm.
- A Service is specifically the endpoint that we give to a set of Pods. For e.g., when we use a Controller such as the Deployment Controller to deploy a set of replica Pods, we would then set a Service on that.
- A Service is a persistent endpoint in the cluster, so that everything else can access that set of Pods at a specific DNS name and port.
- Namespace: Filtered group of objects in a cluster.
- It's an optional, advanced feature.
- It's simply a filter for our views when we are using the kubectl command line.
- For e.g. when we are using Docker Desktop, kubectl defaults to the default namespace and filters out all of the system containers running Kubernetes in the background, because normally we don't want to see those containers when working with the kubectl command line.
- There are many other things in Kubernetes, such as:
- Secrets
- ConfigMaps
- And more..
+ Kubernetes Run, Create and Apply
- kubectl run: This command is changing to be only for Pod creation.
- kubectl create: Create some resources via CLI or YAML.
- kubectl apply: Create/update anything via YAML.
+ Creating First Pods - nginx
- Create:
```sh
kubectl version
kubectl run my-nginx --image nginx
kubectl get pods
kubectl get all
```
- Cleanup
```sh
kubectl get pods
kubectl get all
kubectl delete deployment my-nginx
kubectl get all
```
+ Scaling Replica Sets - Apache Httpd
- Create:
```sh
kubectl run my-apache --image httpd # 'run' gives us a single 'pod' or 'replica'
kubectl get all
```
- Scale
```sh
# Use either of the below commands to scale:
kubectl scale deploy/my-apache --replicas 2
# Or:
# kubectl scale deployment my-apache --replicas 2 # Both commands are the same.
kubectl get all
```
+ Inspecting Kubernetes Objects - Apache Httpd
- Create:
```sh
kubectl run my-apache --image httpd # 'run' gives us a single 'pod' or 'replica'
kubectl scale deploy/my-apache --replicas 2 # Scale it to 2 replicas.
kubectl get all
```
- Commands to inspect Kubernetes Objects:
```sh
kubectl get pods

# Get Container logs:
kubectl logs deployment/my-apache
kubectl logs deployment/my-apache --follow --tail 1
kubectl logs -l run=my-apache # '-l' is for label.

# Get a bunch of details about an object, including events:
kubectl get pods
kubectl describe pod/my-apache-<pod id>

# Watch a command (without needing 'watch'):
kubectl get pods -w # Run this command in one terminal window.
kubectl delete pod/my-apache-<pod id> # Run this command in another terminal window.
kubectl get pods # Run this command in the other terminal window.
```
- Cleanup
kubectl delete deployment my-apache
+ Kubernetes Services
- A service is a stable address for pod(s).
- If we want to connect to pod(s), we need a service.
- The kubectl expose command creates a service for existing pods.
- CoreDNS allows us to resolve services by their names.
- But this only works for services in the same namespace. To list all namespaces, run kubectl get namespaces.
- Services also have an FQDN (Fully Qualified Domain Name).
- We can curl a service with its FQDN as below:
curl <hostname>.<namespace>.svc.cluster.local
- There are four different types of services in Kubernetes:
- ClusterIP
- NodePort
- LoadBalancer
- ExternalName
- ClusterIP and NodePort are the services which are always available in Kubernetes.
- There's one more way external traffic can get inside our Kubernetes cluster: it is called Ingress.
- The following 3 service types are additive; each one creates the ones above it:
- ClusterIP
- NodePort
- LoadBalancer
+ Kubernetes Services - ClusterIP (default)
- It's only available in the cluster.
- This is about one set of Kubernetes Pods talking to another set of Kubernetes Pods.
- It gets its own DNS address. That's going to be the DNS address in the CoreDNS control plane.
- Single, internal virtual IP allocated. In other words, it's going to get an IP address in the virtual IP address space inside the cluster, which allows other pods running in the cluster to talk to this service using the port of the service.
- Only reachable from within the cluster (nodes and pods).
- Pods can reach the service on the app's port number.
ClusterIPservice:kubectl get pods -w kubectl create deployment httpenv --image=bretfisher/httpenv kubectl scale deployment/httpenv --replicas=5 kubectl expose deployment/httpenv --port 8888 kubectl get service kubectl run --generator run-pod/v1 tmp-shell --rm -it --image bretfisher/netshoot -- bash curl httpenv:8888 curl [ip of service]:8888 kubectl get service
+ Kubernetes Services - NodePort
- When we create a NodePort service, we get a high port on each node that's assigned to this service.
- The port is open on every node's IP.
- Anyone can connect (if they can reach the node).
- A NodePort service also creates a ClusterIP internally.
+ Kubernetes Services - LoadBalancer
- This service is mostly used in Clouds.
- It controls an LB endpoint external to the cluster.
- When we create a LoadBalancer service, it automatically creates ClusterIP and NodePort services internally.
- Only available when the infra provider gives us an LB (e.g. AWS ELB etc).
- Creates ClusterIP and NodePort services and then tells the LB to send traffic to the NodePort.
NodePortandLoadBalancerservice:kubectl get all kubectl expose deployment/httpenv --port 8888 --name httpenv-np --type NodePort kubectl get services curl localhost:32334 kubectl expose deployment/httpenv --port 8888 --name httpenv-lb --type LoadBalancer kubectl get services curl localhost:8888 kubectl delete service/httpenv service/httpenv-np kubectl delete service/httpenv-lb deployment/httpenv
+ Kubernetes Services - ExternalName
- This service is used less often.
- Adds a CNAME DNS record to CoreDNS only.
- Not used for Pods, but for giving Pods a DNS name to use for something outside Kubernetes. A minimal manifest sketch follows.
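- A minimal ExternalName manifest sketch (the service name and external hostname are made-up examples):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com # CoreDNS answers lookups for 'external-db' with this CNAME.
```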
+ Run, Create, Expose Generators
- Generators are like templates. They essentially create the spec, or specification, to apply to the Kubernetes Cluster based on our command line options.
- Commands like run, create, expose etc. use helper templates called Generators.
- Every resource in Kubernetes has a specification, or spec. For e.g.:
```sh
kubectl create deployment aditest --image nginx --dry-run -o yaml
```
- We can output these templates with --dry-run -o yaml and use the YAML defaults as a starting point.
- Generators are opinionated defaults.
+ Generators Example
- Using dry-run with yaml output we can see the generators.
- Examples:
```sh
kubectl create deployment aditest --image nginx --dry-run -o yaml
kubectl create job aditest --image nginx --dry-run -o yaml
# The deployment "aditest" must exist before the below command works.
kubectl expose deployment/aditest --port 80 --dry-run -o yaml
```
+ Imperative vs. Declarative
- Imperative: Focus on how a program operates.
- Declarative: Focus on what a program should accomplish.
- For e.g. Coffee
- Imperative: I will boil water, scoop out 42 grams of medium-fine grounds, pour over 700g of water, etc.
- Declarative: Barista, I would like a cup of coffee.
- The barista is an engine that works through the steps, including retrying, to make a cup of coffee, and is only finished when I have a cup of coffee.
+ Imperative Kubernetes
- Examples: kubectl run, kubectl create deployment, kubectl update.
- We start with a state we know (no deployment exists).
- We ask kubectl run to create a deployment.
- Different commands are required to change that deployment.
- Different commands are required per object.
- Imperative is easier to get started.
- Imperative is easier for humans at the CLI.
- Imperative is easier when we know the state.
- Imperative is not easy to automate.
+ Declarative Kubernetes
- Declarative means we don't know the state, we just know the end result that we want.
- Example:
kubectl apply -f my-resources.yml
- We don't know the current state.
- We only know what we want the end result to be (yaml contents).
- Same command each time (tiny exception for delete).
- Resources can be in a single file, or multiple files (apply a whole dir).
- Requires understanding the YAML keys and values.
- More work than kubectl run for just starting a Pod.
- The easiest way to automate our orchestration.
- The eventual path to GitOps happiness.
+ Three Management Approaches
- Imperative Commands: run, expose, scale, edit, create deployment etc.
- Best for dev/learning/personal projects.
- Easy to learn, hardest to manage over time.
- Imperative Objects: create -f file.yml, replace -f file.yml, delete ...
- Good for prod in small environments; single file per command.
- Store our changes in git-based yaml files.
- Hard to automate.
- Declarative Objects: apply -f file.yml or apply -f dir/ or diff etc.
- Best for prod, easier to automate.
- Harder to understand and predict changes.
- MOST IMPORTANT RULES:
- Don't mix the 3 approaches when we have a true production dependency.
- Store yaml in Git, Git Commit each change before we apply.
+ Using kubectl apply
- Create/Update resources in a file:
kubectl apply -f myfile.yml
- Create/Update a whole directory of yaml:
kubectl apply -f adiYamls/
- Create/Update from a URL:
kubectl apply -f https://aditya.io/pod.yml
+ Kubernetes Configuration YAML
- Kubernetes Configuration File (YAML or JSON).
- A full description of a resource in Kubernetes is a manifest.
- Each file contains one or more manifests.
- Each manifest describes an API Object (for e.g. deployment, job, secret).
- Each manifest needs four parts (root/main key:values in the file). They are (see the minimal sketch below):
- apiVersion
- kind
- metadata
- spec
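- A minimal sketch of a manifest with all four root keys (the names and labels are made-up examples):
```yaml
apiVersion: apps/v1 # API group/version this kind belongs to.
kind: Deployment    # The API Object this manifest describes.
metadata:
  name: my-nginx    # 'name' is the only required metadata.
spec:               # Desired state; available keys depend on the kind.
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```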
+ How To Build YAML File
- kind: We can get a list of resources the cluster supports:
kubectl api-resources
- apiVersion: We can get a list of api versions the cluster supports:
kubectl api-versions
- metadata: Only name is required.
- spec: Where all the action is at!
- We can get all the keys for spec by running: kubectl explain services.spec
- We can get all the keys for a specific key in spec by running: kubectl explain services.spec.<TYPE>
- We can get all the keys each kind supports: kubectl explain services --recursive
- sub spec: Can have sub spec of other resources.
- We can get all the keys for a sub spec of any resource by running: kubectl explain deployment.spec.template.spec.volumes.nfs.server
+ Dry Runs With Apply YAML
- Client-side-only dry run:
kubectl apply -f app.yml --dry-run
- Server Side dry run:
kubectl apply -f app.yml --server-dry-run
- To See Diff Visually:
kubectl diff -f app.yml
+ Labels And Annotations
- Labels go under metadata in YAML.
- They are optional.
- A simple list of key:value pairs for identifying our resource later by selecting, grouping or filtering for it.
- Common examples (see the sketch below):
- env: prod
- tier: frontend
- app: api
- customer: aditya.com
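- A small sketch of labels under metadata, and then selecting by them (the values are made-up examples):
```yaml
metadata:
  name: api
  labels:
    env: prod
    tier: frontend
    app: api
```
- We can then filter by label, for e.g.: kubectl get pods -l app=api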
+ What is Kubernetes
- Kubernetes is a container orchestration tool that is used for automating the tasks of managing, monitoring, scaling and deploying containerized applications.
- It creates groups of containers that can be logically discovered and managed for easy operations on containers.
+ Difference between Docker Swarm and Kubernetes
- Docker Swarm is the default container orchestration tool that comes with Docker.
- Docker Swarm can only orchestrate simple Docker Containers.
- Kubernetes helps manage much more complex software application containers.
- Kubernetes offers support for larger-demand production environments.
- Docker Swarm can't do auto-scaling.
- Docker Swarm doesn't have a GUI.
- Docker Swarm can deploy rolling updates but can't deploy automatic rollbacks.
- Docker Swarm requires third-party tools like the ELK stack for logging and monitoring, while Kubernetes has integrated tools for the same.
- Docker Swarm can share storage volumes with any containers easily, while Kubernetes can only share storage volumes with containers in the same pod.
+ What is Heapster?
- Heapster lets us do container cluster monitoring.
- It lets us do cluster-wide monitoring and event data aggregation.
- It has native support for Kubernetes.
+ What is a kubelet?
- The kubelet is a service agent that controls and maintains a set of pods by watching for pod specs through the Kubernetes API server.
- It preserves the pod lifecycle by ensuring that a given set of containers are all running as they should.
- The kubelet runs on each node and enables the communication between the master and slave nodes.
+ What is kubectl?
- Kubectl is the Kubernetes command line tool that is used for deploying and managing applications on Kubernetes.
- Kubectl is especially useful for inspecting cluster resources, and for creating, updating and deleting components.
+ What are the different types of Kubernetes services?
- Cluster IP Service:
- It is the default type of service whenever we create a service and don't specify what it should be.
- Cluster IP type service can only be reached internally within the cluster.
- It is not exposed to the outside world.
- Node Port Service:
- It exposes the service on each node's IP at a static port.
- A Cluster IP service is created automatically and the Node Port service routes to it.
- External Name:
- Maps the service to the contents of the externalName field.
- It does this by returning the value of a CNAME record.
- Load Balancer Service:
- It exposes the service externally using the load balancer of our cloud provider.
- The external load balancer routes to the Node Port and Cluster IP services, which are created automatically.
+ How to set a static IP for the Kubernetes Load Balancer?
- The Kubernetes Master assigns a new IP address.
- We can set a static IP for the Kubernetes load balancer by changing the DNS records whenever the Kubernetes Master assigns a new IP address.
+ What is ETCD?
- Kubernetes uses ETCD as a distributed key-value store for all of its data, including metadata and configuration data; it allows nodes in Kubernetes clusters to read and write data.
- ETCD represents the state of a cluster at a specific moment in time and is a center for state management and cluster coordination of a Kubernetes cluster.
+ Can we use many claims out of a persistent volume?
- Answer = NO!
- The mapping between a persistentVolume and a persistentVolumeClaim is always one to one.
- Even when we delete the claim, the persistentVolume still remains if persistentVolumeReclaimPolicy is set to Retain, and it will not be reused by any other claim. A minimal sketch follows.
+ How do you deploy a feature with zero downtime in Kubernetes?
- Short answer: We can apply rolling updates.
- In Kubernetes, we can define the update strategy in the deployment.
- We should set RollingUpdate as the strategy to ensure no downtime, as sketched below.
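- A sketch of the strategy block inside a Deployment spec (the surge/unavailable values are illustrative):
```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1       # At most 1 extra Pod during the update.
      maxUnavailable: 0 # All replicas keep serving, i.e. zero downtime.
```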
+ What is the difference between replication controllers and replica sets?
- Replication controllers are obsolete now in the latest versions of Kubernetes.
- The main difference between Replication Controllers and Replica Sets is the selectors: Replication Controllers only support equality-based selectors, while Replica Sets also support set-based selectors in their specs.
+ What is Kube-Proxy?
- Kube-Proxy runs on each of the nodes.
- It is responsible for directing traffic to the right container based on the IP and port number of the incoming request.
+ What is a Headless Service?
- It is similar to a normal service but it doesn't have a Cluster IP.
- It enables us to reach the pods directly, without going through a proxy. A minimal sketch follows.
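- A minimal headless Service sketch (the name, selector and port are made-up examples); setting clusterIP: None is what makes it headless:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None # Headless: no virtual IP; DNS resolves directly to the Pod IPs.
  selector:
    app: db
  ports:
  - port: 5432
```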
+ What is a PVC (Persistent Volume Claim)?
- It's storage requested by Kubernetes for pods.
- The user is not required to know the underlying provisioning.
- This claim should be created in the same namespace where the pod is created, as in the sketch below.
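- A sketch of a Pod consuming a claim in its own namespace (the names are made-up examples):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: default # The PVC must exist in this same namespace.
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-data # The underlying provisioning stays hidden from the user.
```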
+ What are the different components in the Kubernetes architecture?
- Master Node:
- API Server:
- REST API used to manage and manipulate the cluster.
- Controller Manager:
- Daemon responsible for regulating the cluster in Kubernetes and managing non-terminating control loops.
- Scheduler:
- Responsible for scheduling tasks on worker nodes. It also keeps resource utilization data for each of the worker nodes.
- ETCD:
- Distributed key-value storage where we have shared configurations. It is also used for service discovery. It stores all the information about the current state of the cluster.
- Worker Node:
- Kubelet:
- Its job is to get the configuration of pods from the API server and ensure everything is running according to that.
- Kube-Proxy:
- Behaves like a network proxy as well as a load balancer for services on a worker node. It directs traffic to the right container based on the IP and port number of the incoming request.
- Pod:
- Smallest unit in the Kubernetes ecosystem. It can have one or more containers, which run together on the same node.
- Container:
- Runs inside a Pod.
+ How to pass sensitive information in a cluster?
- We can pass sensitive information in Kubernetes using Secrets.
- Secrets can be created using YAML and TXT files, as sketched below.
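- A sketch of creating Secrets from a TXT file, a literal, or a YAML manifest (the file and secret names are made-up examples):
```sh
# From a text file:
kubectl create secret generic db-pass --from-file=./password.txt
# From a literal value:
kubectl create secret generic db-user --from-literal=username=admin
# From a YAML manifest (values must be base64-encoded inside it):
kubectl apply -f my-secret.yml
kubectl get secrets
```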
+ What is the Sematext Docker Agent?
- It is a log collection agent with events and metrics.
- It runs as a small container in each Docker host.
- These agents gather metrics, events, and logs for all cluster nodes and containers.
+ Can we make sure that a pod gets scheduled on a specific worker node?
- Pods get scheduled on worker nodes automatically.
- To pin a pod to a particular worker node, we can use a nodeSelector or node affinity; taints and tolerations can additionally keep other pods away from that node. A sketch follows.
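- A sketch of pinning via a node label plus nodeSelector (the label key/value are made-up examples):
```sh
# Label the target worker node:
kubectl label nodes node1 disktype=ssd
```
```yaml
# Then request that label in the Pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    disktype: ssd # The scheduler only places this Pod on nodes carrying this label.
  containers:
  - name: app
    image: nginx
```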
+ Running 3 Containers: nginx (80:80), mysql (3306:3306), httpd (Apache Server - 8080:80)
- MySQL:
- To run MySQL:
```sh
docker container run -d -p 3306:3306 --name db -e MYSQL_RANDOM_ROOT_PASSWORD=yes mysql
```
- View the logs to see the generated random password for the root user. To view logs:
```sh
docker container logs db # 'db' is the name we gave in the above command.
```
- Look for something like the below line for the generated random password:
2020-03-14 12:20:33+00:00 [Note] [Entrypoint]: GENERATED ROOT PASSWORD: ChooHxafdsasd2dsx1ouovo7aegha
- httpd (Apache Server):
- To run httpd:
```sh
docker container run -d -p 8080:80 --name webserver httpd
```
- nginx:
- To run nginx:
```sh
docker container run -d -p 80:80 --name proxy nginx
```
- All containers are running by now. To list all running containers:
```sh
docker container ls
# OR
docker ps # Old command
```
- To clean these containers up:
```sh
docker container stop # Press TAB to get a list of all running containers
# OR
docker container stop proxy webserver db
```
- To remove these containers:
```sh
docker container ls -a # This will list all containers, even stopped ones.
# To remove containers, specify their ids like below:
docker container rm b520f9b00f89 5eaa2a2b09c6 c782914b7c66
```
- To remove Images as well:
```sh
# To remove images, specify their ids like below:
docker image rm b520f4389 5eaa22b09c6 c782432b7c66
```
+ To clean up apt-get cache
- By cleaning up the apt-get cache in Containers, we keep our Image size small.
- It's a best practice to clear the apt-get cache once the required packages are installed.
- The following command cleans up the apt-get cache, and it is used the same way across many popular Images:
```sh
rm -rf /var/lib/apt/lists/*
```
- FOR EXAMPLE: After installing git in an Image, we should clean up the cache to save almost 10mb of space:
```Dockerfile
# Below command installs 'git' and clears the cache after installation.
RUN apt-get update && apt-get install -y git \
    && rm -rf /var/lib/apt/lists/*
```
+ To get a Shell inside a Container
- To start a new Container interactively:
docker container run -it
- To run an additional command in an existing container:
docker container exec -it
- For e.g. to start an httpd container interactively and get a bash shell inside it:
```sh
docker container run -it --name webserver httpd bash
```
- For e.g. to get a bash shell inside an already-running nginx Container named proxy:
```sh
docker container exec -it proxy bash
```
+ To create a temp POD in a cluster and get an interactive shell in it
- This command will create a temporary POD in a running cluster and launch an interactive shell inside it.
- NOTE: This temporary POD will be deleted once we exit the shell.
```sh
kubectl run --generator run-pod/v1 tmp-shell --rm -it --image bretfisher/netshoot -- bash
```
+ Docker Swarm - Create Our First Service and Scale it Locally
- To create a Docker Swarm service and scale it locally, the following commands are useful:
```sh
docker info
docker swarm init
docker node ls
docker node --help
docker swarm --help
docker service --help
docker service create alpine ping 8.8.8.8
docker service ls
docker service ps frosty_newton
docker container ls
docker service update TAB COMPLETION --replicas 3
docker service ls
docker service ps frosty_newton
docker update --help
docker service update --help
docker container ls
docker container rm -f frosty_newton.1.TAB COMPLETION
docker service ls
docker service ps frosty_newton
docker service rm frosty_newton
docker service ls
docker container ls
```
+ Creating a 3-Node Swarm Cluster
- To create a 3-Node Swarm Cluster, the following commands are useful:
```sh
# http://play-with-docker.com
docker info
docker-machine
docker-machine create node1
docker-machine ssh node1
docker-machine env node1
docker info
# http://get.docker.com
docker swarm init
docker swarm init --advertise-addr TAB COMPLETION
docker node ls
docker node update --role manager node2
docker node ls
docker swarm join-token manager
docker node ls
docker service create --replicas 3 alpine ping 8.8.8.8
docker service ls
docker node ps
docker node ps node2
docker service ps sleepy_brown
```
+ Scaling Out with Overlay Networking
- The following set of commands demonstrates scaling out with overlay networking:
```sh
docker network create --driver overlay mydrupal
docker network ls
docker service create --name psql --network mydrupal -e POSTGRES_PASSWORD=mypass postgres
docker service ls
docker service ps psql
docker container logs psql TAB COMPLETION
docker service create --name drupal --network mydrupal -p 80:80 drupal
docker service ls
watch docker service ls
docker service ps drupal
docker service inspect drupal
```
+ Scaling Out with Routing Mesh
- The following set of commands demonstrates scaling out with the Routing Mesh:
```sh
docker service create --name search --replicas 3 -p 9200:9200 elasticsearch:2
docker service ps search
```
+ Create a Multi-Service Multi-Node Web App
- The following set of commands creates a Multi-Service Multi-Node Web App:
```sh
docker node ls
docker service ls
docker network create -d overlay backend
docker network create -d overlay frontend
docker service create --name vote -p 80:80 --network frontend --replicas 2 COPY IMAGE
docker service create --name redis --network frontend --replicas 1 redis:3.2
docker service create --name worker --network frontend --network backend COPY IMAGE
docker service create --name db --network backend COPY MOUNT INFO
docker service create --name result --network backend -p 5001:80 COPY INFO
docker service ls
docker service ps result
docker service ps redis
docker service ps db
docker service ps vote
docker service ps worker
cat /etc/docker/
docker service logs worker
docker service ps worker
```
+ Swarm Stacks and Production Grade Compose
- The following set of commands demonstrates Swarm Stacks and production-grade Compose:
```sh
docker stack deploy -c example-voting-app-stack.yml voteapp
docker stack
docker stack ls
docker stack ps voteapp
docker container ls
docker stack services voteapp
docker stack ps voteapp
docker network ls
docker stack deploy -c example-voting-app-stack.yml voteapp
```
+ Using Secrets in Swarm Services
- Useful commands:
```sh
docker secret create psql_user psql_user.txt
echo "myDBpassWORD" | docker secret create psql_pass - TAB COMPLETION
docker secret ls
docker secret inspect psql_user
docker service create --name psql --secret psql_user --secret psql_pass -e POSTGRES_PASSWORD_FILE=/run/secrets/psql_pass -e POSTGRES_USER_FILE=/run/secrets/psql_user postgres
docker service ps psql
docker exec -it psql.1.CONTAINER NAME bash
docker logs TAB COMPLETION
docker service ps psql
docker service update --secret-rm
```
+ Using Secrets with Swarm Stacks
- Useful commands:
```sh
vim docker-compose.yml
docker stack deploy -c docker-compose.yml mydb
docker secret ls
docker stack rm mydb
```
+ Create A Stack with Secrets and Deploy
- Useful commands:
```sh
vim docker-compose.yml
docker stack deploy -c docker-compose.yml drupal
echo STRING | docker secret create psql-ps - VALUE
docker stack deploy -c docker-compose.yml drupal
docker stack ps drupal
```
+ Service Updates: Changing Things In Flight
- Useful commands:
```sh
docker service create -p 8088:80 --name web nginx:1.13.7
docker service ls
docker service scale web=5
docker service update --image nginx:1.13.6 web
docker service update --publish-rm 8088 --publish-add 9090:80 web
docker service update --force web
```
+ Healthchecks in Dockerfile
- Useful commands:
```sh
docker container run --name p1 -d postgres
docker container ls
docker container run --name p2 -d --health-cmd="pg_isready -U postgres || exit 1" postgres
docker container ls
docker container inspect p2
docker service create --name p1 postgres
docker service create --name p2 --health-cmd="pg_isready -U postgres || exit 1" postgres
```