Deployment → ReplicaSet → Pod → Container: A Deployment manages ReplicaSets, which in turn manage Pods. The Deployment abstracts updates by creating a new ReplicaSet as needed, the ReplicaSet ensures that the desired number of Pods is maintained, and each Pod manages its containers.
Service → Pod: A Service provides access to Pods, matching them by labels and distributing traffic.
Ingress → Service → Pod: Ingress manages HTTP/HTTPS access to Services, which then route traffic to matching Pods.
PVC → PV → Pod: PVCs request storage from PVs, which are then used by Pods needing persistent storage.
ConfigMap/Secret → Pod: ConfigMaps and Secrets store configuration data for Pods, providing flexibility for different environments or sensitive data handling.
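To make the Service → Pod relationship concrete, here is a minimal sketch of a Service that matches Pods by label; the names (my-service, my-app) and ports are illustrative assumptions, not taken from a real cluster:
apiVersion: v1
kind: Service
metadata:
  name: my-service            # hypothetical Service name
spec:
  selector:
    app: my-app               # traffic is routed to any Pod carrying this label
  ports:
  - protocol: TCP
    port: 80                  # port exposed by the Service
    targetPort: 8080          # port the matched Pods listen on (assumed)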
Kubernetes Object Management
The kubectl command-line tool supports several different ways to create and manage Kubernetes objects.
Management techniques:
Management technique               │ Operates on           │ Recommended environment
───────────────────────────────────┼───────────────────────┼────────────────────────
Imperative commands                │ Live objects          │ Development projects
Imperative object configuration    │ Individual files      │ Production projects
Declarative object configuration   │ Directories of files  │ Production projects
👉 Imperative commands:
Imperative commands in Kubernetes are commands executed directly in the CLI (command-line interface) to create, update, or delete resources immediately. This approach tells Kubernetes “what to do” right away, which is helpful for quick, one-time actions or testing. Imperative commands offer a more hands-on, procedural way of managing Kubernetes objects, as opposed to declarative approaches that specify the desired end state in configuration files.
When using imperative commands, a user operates directly on live objects in a cluster, providing operations to the kubectl command as arguments or flags.
This is the recommended way to get started or to run a one-off task in a cluster. Because this technique operates directly on live objects, it provides no history of previous configurations.
Common Use Cases:
Creating Resources: Quickly create resources like Pods, Deployments, and Services.
Modifying Resources: Scale resources, edit configurations, or expose Services.
Deleting Resources: Remove resources when they’re no longer needed.
Examples of Imperative Commands: Below are some commonly used imperative commands in Kubernetes.
Creating a Pod:
kubectl run my-pod --image=nginx
This command creates a Pod named my-pod running the nginx image.
Creating a Deployment:
kubectl create deployment my-deployment --image=nginx
This creates a Deployment named my-deployment using the nginx container image.
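The same hands-on style covers the scaling, exposing, and deletion use cases listed above; these commands reuse the illustrative names my-deployment and my-pod:
Scaling a Deployment:
kubectl scale deployment my-deployment --replicas=5
Exposing a Deployment as a Service:
kubectl expose deployment my-deployment --port=80
Deleting a Pod:
kubectl delete pod my-pod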
👉 Imperative object configuration:
Imperative object configuration in Kubernetes refers to creating, updating, or deleting resources by pairing an explicit kubectl operation with a file that contains the object’s full configuration, usually in YAML or JSON. This is a more structured form of imperative management: because the full object definition lives in a file, it sits somewhat closer to the declarative approach, but the user still chooses the operation, and it is executed immediately.
In imperative object configuration, the kubectl command specifies the operation (create, replace, etc.), optional flags and at least one file name. The file specified must contain a full definition of the object in YAML or JSON format.
Examples of Imperative Object Configuration:
Create the objects defined in a configuration file:
kubectl create -f nginx.yaml
Delete the objects defined in two configuration files:
kubectl delete -f nginx.yaml -f redis.yaml
Update the objects defined in a configuration file by overwriting the live configuration:
kubectl replace -f nginx.yaml
Creating a Pod with YAML Configuration in CLI:
kubectl run my-pod --image=nginx --restart=Never --dry-run=client -o yaml | kubectl apply -f -
This command generates a Pod manifest as YAML in the CLI. It runs the nginx container and does not restart it on failure (--restart=Never). The --dry-run=client -o yaml flags print the YAML configuration instead of creating the Pod directly, and the output is then piped to kubectl apply.
👉 Declarative object configuration:
Declarative object configuration in Kubernetes is a method of managing resources by defining the desired state of the objects in configuration files, typically YAML or JSON, and then applying these files to the cluster. This approach allows you to specify the final state you want, and Kubernetes takes care of ensuring that the actual state matches this desired state. Declarative configuration is widely used for managing complex deployments and maintaining consistency because it allows you to track, version, and easily replicate configurations.
When using declarative object configuration, a user operates on object configuration files stored locally; however, the user does not define the operations to be taken on those files. Create, update, and delete operations are automatically detected per-object by kubectl. This enables working on directories, where different operations might be needed for different objects.
Examples of Declarative Object Configuration:
Define the Configuration in YAML or JSON:
Create a configuration file that defines the desired state of a Kubernetes object.
For example, a Deployment configuration file, my-deployment.yaml, might look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx
        ports:
        - containerPort: 80
Apply the Configuration to the Cluster:
Use kubectl apply -f <filename.yaml> to create or update resources based on the configuration file.
Kubernetes will compare the desired state defined in the file to the current state of the cluster and make only the necessary changes.
kubectl apply -f my-deployment.yaml
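Because kubectl detects the required operation per object, you can also point apply at a whole directory of manifests and preview the result first; the ./manifests/ directory name is an assumed layout for illustration:
kubectl diff -f ./manifests/
kubectl apply -f ./manifests/
The diff command shows what would change without touching the cluster, and apply then creates or updates every object defined in the directory.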
Kubernetes Configuration Techniques:
- All-in-one single-node installation: An all-in-one single-node installation of Kubernetes is a simplified deployment where the Kubernetes control plane (API server, controller manager, scheduler) and the worker components (kubelet, kube-proxy, etc.) run on the same machine. This setup is ideal for development, testing, and learning purposes but should not be used in production environments due to its lack of scalability and high availability.
There are multiple ways to perform an all-in-one installation of Kubernetes, but two of the most common methods are:
Using Minikube
Using Kubeadm
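As a minimal sketch, a Minikube-based all-in-one cluster can be brought up with two commands; the docker driver is one common choice and is assumed here:
minikube start --driver=docker
kubectl get nodes
The first command starts a single node that hosts both the control plane and the workloads; the second verifies that the node reports Ready.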
- Single-node etcd, single-master, and multi-worker installation: In this setup, there is a single master node, which also runs a single-node etcd instance, and multiple worker nodes are connected to that master.
Setting up a Single Master and Multi-Worker Kubernetes cluster is a more basic configuration compared to the multi-master setup but still provides scalability by adding multiple worker nodes. This setup is useful for smaller environments or testing scenarios but lacks high availability on the control plane. If the master node fails, the cluster may experience downtime, so for production, a multi-master setup is typically recommended.
In this configuration:
Single Master Node: The control plane (API server, scheduler, controller manager) is on a single master node.
Multi-Worker Nodes: Multiple worker nodes are used to run the application workloads as Pods.
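With kubeadm, this topology boils down to one init on the master followed by one join per worker; the address, token, and hash below are placeholders, not real values:
kubeadm init --pod-network-cidr=10.244.0.0/16
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
The init command runs on the master and starts the control plane (including the local single-node etcd); the join command, printed by init, runs on each worker.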
- Single-node etcd, multi-master, and multi-worker installation: In this setup, there are multiple master nodes working in HA mode, but still only a single-node etcd instance. Multiple worker nodes are connected to the master nodes.
Setting up a Multi-Master and Multi-Worker Kubernetes cluster provides high availability and fault tolerance by distributing control plane components (masters) and worker nodes across multiple machines. This setup ensures that the cluster remains operational even if one or more master or worker nodes fail.
In this configuration:
Multi-Master Nodes: There are multiple master nodes for redundancy, running Kubernetes control plane components like the API server, scheduler, and controller manager.
Multi-Worker Nodes: The worker nodes are responsible for running the actual application workloads in the form of Pods.
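As a rough kubeadm-style sketch of the multi-master join flow (the load-balancer address and credentials are placeholders; note that kubeadm's default stacked mode co-locates etcd on every master, so keeping etcd on a single external node as described here would additionally need a kubeadm configuration file, omitted for brevity):
kubeadm init --control-plane-endpoint "<lb-address>:6443" --upload-certs
kubeadm join <lb-address>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <key>
kubeadm join <lb-address>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
The first command initializes the first master behind a shared endpoint (typically a load balancer), the second joins each additional master as a control-plane node, and the third joins each worker.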