What is Docker? What is a container? Why are they important?
Docker is like a magic box for software. It holds everything an app needs to run, like the app itself, tools, and libraries. It's like a lunchbox for your software.
Software is often a mix of different parts, like web servers, databases, and other tools. When people build software, they sometimes face problems, like making sure it works on different computers.
Docker containers help with this. They keep everything together, so the software runs the same everywhere. It's like a lunchbox that keeps your food fresh.
So, Docker makes it easier for developers to create, share, and run their software. It's a helpful tool for building and managing software.
Docker
Docker is a suite of platform-as-a-service products that utilize OS-level virtualization to distribute software within self-contained units called containers. These containers share the underlying operating system kernel but are isolated from one another. Each container includes its own software, libraries, and configuration files.
Docker is known for its fast startup times.
In Docker, an 'image' serves as a preconfigured template. Containers, in contrast, are running instances of these images: each one runs in isolation, with its own environment and set of processes.
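As a rough sketch of that relationship (the my-app image name and the ports are just placeholders for illustration):

```bash
# Build an image (the template) from the Dockerfile in the current directory
docker build -t my-app:1.0 .

# Start a container (a running instance of that image),
# mapping port 8080 on the host to port 80 inside the container
docker run -d -p 8080:80 --name my-app-instance my-app:1.0

# List running containers; the same image can back many containers at once
docker ps
```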
What is Orchestration? Why Do We Need Container Orchestration?
- Orchestration in system administration involves the automated configuration, coordination, and management of computer systems and software. Numerous tools are available for automating server configuration and management.
In container orchestration, the focus is on monitoring application load and performance while ensuring the swift deployment of new containers in case of failures.
Container orchestration automates various aspects of coordinating and managing containers, primarily concentrating on the lifecycle of containers and their dynamic environments.
Container orchestration serves to automate the following tasks at scale:
Configuring and scheduling containers
Provisioning and deploying containers
Managing container availability
Configuring applications based on the containers they run in
Scaling containers to evenly distribute application workloads across infrastructure
Allocating resources among containers
Handling load balancing, traffic routing, and service discovery for containers
Monitoring the health of containers
Ensuring secure interactions between containers
Container orchestration comprises a set of tools and scripts that aid in hosting and managing containers.
There are multiple container orchestration options available today, including Docker Swarm, Kubernetes, Apache Mesos, and managed cloud offerings.
Kubernetes
Kubernetes is an open-source container orchestration system that automates application deployment, scaling, and management.
- Kubernetes is like a smart manager for your apps: it helps them deploy, grow, and stay organized. To talk to Kubernetes, you use a command-line tool called kubectl.
ETCD: Your Kubernetes Memory
Think of etcd as Kubernetes' memory. It is a key-value store that holds the cluster's state, such as which containers are on which node and when they were loaded. Whenever Kubernetes needs to remember something about the cluster, that data is kept in etcd.
Kube-scheduler: The Smart Deployer
Kube-scheduler is like a smart organizer. It finds the perfect node to put your container based on its needs, how much space is available on worker nodes, and any other rules you've set.
Controller Manager: The Node Manager
The controller manager runs the cluster's controllers. The node controller, for example, handles new nodes joining the cluster and deals with situations where nodes go missing. It's like the person who welcomes new members to a club and notices when someone stops showing up.
Kube-api server: The Commander
The kube-api server is the commander of the Kubernetes army. It manages everything that happens within the cluster and provides an interface for people outside the cluster to make things happen.
Kubelet: The Watchful Guard
Kubelet is like a guardian for each node in the cluster. It takes orders from the kube-api server and makes sure everything runs smoothly. The kube-api server also keeps an eye on kubelet to know what's happening with nodes and containers.
Kube-proxy: The Traffic Cop
Kube-proxy makes sure the traffic rules are set on worker nodes so containers can talk to each other.
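On many clusters these control-plane and node components themselves run as pods in the kube-system namespace, so a quick way to see them in action (the exact list varies by cluster and installer) is:

```bash
# Show the system pods: etcd, kube-scheduler, the controller manager,
# kube-apiserver, kube-proxy, and friends
kubectl get pods -n kube-system
```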
Kubernetes Runtime Environment: Where the Magic Happens
Kubernetes doesn't run containers by itself: each node has a container runtime, such as containerd or Docker, which pulls the images and starts the containers that Kubernetes schedules.
Kubectl: Your Command Center
Kubectl is like a remote control for your Kubernetes cluster. It's how you give orders and get things done. You can tell kubectl what to do with simple commands.
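For example, a few everyday orders you can give it (the file and pod names here are placeholders):

```bash
# Create or update resources from a manifest file
kubectl apply -f your-file.yaml

# List the pods in the current namespace
kubectl get pods

# Delete a pod by name
kubectl delete pod my-pod
```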
Pods: Where Your Apps Live
A pod is like a cozy home for your apps. It's the smallest thing Kubernetes can manage, and it's where your containers live. Think of a pod as a special place where different parts of your app can talk to each other. But be careful – if a pod's home (the node) goes away, the pod disappears too. It's not built to survive the destruction of its home.
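For a sense of what that home looks like on paper, here is a minimal pod manifest (the names and the nginx image are only examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: my-app
spec:
  containers:
    - name: my-app-container
      image: nginx:1.25   # example image; use your own application's image
      ports:
        - containerPort: 80
```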
Kubernetes Deployment Script
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
        - name: your-app-container
          image: your-app-image:latest
          ports:
            - containerPort: 80
          env:
            - name: ENV_VARIABLE_NAME
              value: "your-environment-value"
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-label-key
                    operator: In
                    values:
                      - node-label-value
```
- apiVersion specifies the Kubernetes API version for Deployments.
- kind specifies that this is a Deployment.
- metadata includes information like the name of your deployment.
- spec defines the desired state for your deployment. You can adjust the number of replicas, the container image, and the container port to fit your application's requirements.
- env allows you to specify environment variables for your container.
- resources defines resource requests and limits for memory and CPU.
- affinity enables you to set node affinity based on node labels.
Make sure to replace your-app-deployment, your-app, your-app-container, and your-app-image:latest with the appropriate values for your application. This script will create a deployment with three replica pods.
You can save this script in a YAML file and then apply it to your Kubernetes cluster using the kubectl apply -f your-script.yaml command. This will deploy your application on Kubernetes.
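Once applied, you can check that the deployment and its pods came up, using the labels from the manifest above:

```bash
# Check the deployment and how many replicas are ready
kubectl get deployments

# List only the pods created by this deployment, via its label selector
kubectl get pods -l app=your-app
```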
Service
In Kubernetes, a service sits in front of a group of pods. It gives them a single, stable address, so there is a consistent way to organize and reach these pods even as individual pods come and go.
There are three main types of services:
NodePort: This service type lets you access your pods from outside the cluster. It's like opening a door on each node so that external traffic can reach your pods.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: my-app
```
ClusterIP: These services make your pods accessible only from within the cluster. It's like an internal phone system that lets pods talk to each other.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: my-app
```
LoadBalancer: This type sets up a load balancer that can distribute external traffic to your pods. It's like having a receptionist who directs calls to the right person.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: my-app
```
Services help manage how your pods are reached and used, making it easier to run your applications.
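For example, with the NodePort service defined above, you can look up the port Kubernetes assigned (a high port in the 30000-32767 range by default) and reach your pods from outside the cluster; the node IP and port below are placeholders:

```bash
# See which node port was assigned to the service
kubectl get service my-nodeport-service

# Reach the pods through any node's IP and that port
curl http://<node-ip>:<node-port>
```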
Kubernetes Cheat Sheet
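A few commands that come up constantly (the resource names are placeholders, adjust them to your own cluster):

```bash
# Cluster overview
kubectl get nodes

# Scale a deployment up or down
kubectl scale deployment your-app-deployment --replicas=5

# Watch a rolling update finish
kubectl rollout status deployment/your-app-deployment

# Open a shell inside a running container
kubectl exec -it my-pod -- sh
```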
Kubernetes Troubleshooting
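When something misbehaves, these are usually the first things to check (my-pod is a placeholder; see the troubleshooting guide in the resources below for a fuller flowchart):

```bash
# Why is the pod pending or crashing? The events at the bottom usually say.
kubectl describe pod my-pod

# Application output from the container (add --previous after a crash)
kubectl logs my-pod

# Recent cluster events, useful for scheduling and image-pull problems
kubectl get events
```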
📚Resources
1) rootsongjc/kubernetes-handbook/
2) https://kubernetes.io/docs/home/
3) https://learnk8s.io/troubleshooting-deployments
🙏Thank you for reading...
✈️ Linkedin: https://www.linkedin.com/in/vgirish10/