What are Pods in Kubernetes?

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.

A Pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod’s contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific “logical host”: it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.

The shared context of a Pod is a set of Linux namespaces, cgroups, and potentially other facets of isolation — the same things that isolate a container. Within a Pod’s context, the individual applications may have further sub-isolations applied.

A Pod is similar to a set of containers with shared namespaces and shared filesystem volumes.

For example:

When you create a Deployment, Kubernetes creates a Pod to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker containers), and some shared resources for those containers.

Those resources include:

  • Shared storage, as Volumes

  • Networking, as a unique cluster IP address

  • Information about how to run each container, such as the container image version or specific ports to use

A Pod models an application-specific “logical host” and can contain different application containers which are relatively tightly coupled.

For example, a Pod might include both the container with your Node.js app as well as a different container that feeds the data to be published by the Node.js webserver. The containers in a Pod share an IP Address and port space, are always co-located and co-scheduled, and run in a shared context on the same Node.

Pods are the atomic unit on the Kubernetes platform. When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating containers directly). Each Pod is tied to the Node where it is scheduled, and remains there until termination (according to restart policy) or deletion. In case of a Node failure, identical Pods are scheduled on other available Nodes in the cluster.

Pods in a Kubernetes cluster are used in two main ways:

  • Single Container Pod: A single-container Pod is the most common type of Pod in Kubernetes. It contains only one container and is typically used to run a standalone application or process. This type of Pod is straightforward and represents a single unit of deployment in Kubernetes.

  • Multiple Containers Pod: A multi-container Pod in Kubernetes contains more than one container running together as a single unit. These containers share the same network namespace and can communicate with each other using localhost. They also share the same storage volumes if configured. Multi-container Pods are useful for closely coupled applications that need to work together.
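
For instance, here is a minimal sketch of a multi-container Pod manifest (the Pod name my-app-pod and the container names web and content-sidecar are made up for this illustration). The two containers share the Pod's network namespace and an emptyDir volume, with the sidecar producing content that the web server serves:

apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
  - name: web                         # main application container
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-sidecar             # helper container that feeds data to the web server
    image: busybox:latest
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}                      # scratch volume shared by both containers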

Kubernetes Pods Key Points:

  • A Kubernetes Pod is the smallest deployable unit in Kubernetes.

  • Kubernetes doesn’t run containers directly; instead it wraps one or more containers into a higher-level structure called a pod.

  • One Pod can hold multiple containers, so the tiers of a tightly coupled application can run together in a single Pod.

  • Pods were designed to support multiple container runtime environments, so a Pod can be backed by runtimes such as Docker or rkt (Rocket).

  • It is also the smallest deployable unit that can be created, scheduled, and managed on a Kubernetes cluster. Each Pod is assigned a unique IP address within the cluster.

  • A Pod can contain multiple containers and multiple volumes. Because all containers in a Pod share the same network namespace, they share the Pod's IP address and port space; if one container uses port 80, no other container in that Pod can use port 80.

  • Pods can hold multiple containers, but keep the number small where possible. Because Pods are scaled up and down as a unit, all containers in a Pod must scale together, regardless of their individual needs, which can lead to wasted resources.

  • Any containers in the same pod will share the same storage volumes and network resources and communicate using localhost.

  • K8s uses YAML to describe the desired state of the containers in a pod. This is also called a Pod Spec. These objects are passed to the kubelet through the API server.

  • Pods are used as the unit of replication in Kubernetes. If your application becomes too popular and a single pod instance can’t carry the load, Kubernetes can be configured to deploy new replicas of your pod to the cluster as necessary.

  • A volume can be shared between multiple Pods (and between the containers within a single Pod).
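
Several of the points above (shared IP address, shared port space, localhost communication) can be observed directly with kubectl. The commands below assume the hypothetical my-app-pod manifest sketched earlier:

# Shell into the sidecar container of the Pod
kubectl exec -it my-app-pod -c content-sidecar -- sh

# From inside, the web container is reachable over localhost because
# both containers share the Pod's network namespace
wget -qO- http://localhost:80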

Kubernetes Pod Lifecycle:

Pods follow a defined lifecycle, starting in the Pending phase, moving through Running if at least one of its primary containers starts OK, and then through either the Succeeded or Failed phases depending on whether any container in the Pod terminated in failure.

Pods are created, assigned a unique ID (UID), and scheduled to run on nodes, where they remain until termination (according to restart policy) or deletion. If a node dies, the Pods running on (or scheduled to run on) that node are marked for deletion; the control plane marks them for removal after a timeout period.

Knowing these phases and statuses helps you monitor and troubleshoot Kubernetes workloads effectively. Below are the phases (and related statuses) you will commonly encounter.

  1. Pending — The Pod has been accepted by the cluster, but it has not been fully scheduled or started yet; it may still be waiting for a node with sufficient resources (CPU, memory, storage) or for its container images to be pulled.

  2. Running — The Pod enters the Running state when it has been successfully scheduled to a node, and the containers within the Pod are executing their tasks and operating as expected.

  3. Succeeded — The Pod enters the Succeeded state when all containers within the Pod have successfully terminated after completing their tasks.

  4. Failed — The Pod enters the Failed state when all of its containers have terminated and at least one of them terminated in failure, i.e., exited with a non-zero exit code or was killed by the system. For example, the application inside a container cannot start due to a configuration issue and exits with a non-zero status.

  5. CrashLoopBackOff — Strictly a container status rather than a Pod phase, but one you will see often: the application inside a container repeatedly fails to start (for example, due to incorrect or missing configuration), so the container keeps crashing and Kubernetes backs off for progressively longer intervals before retrying.

  6. Unknown — The Pod enters the Unknown state when there is an error or no communication with the node where the Pod is running, preventing Kubernetes from determining its current status.

  7. Terminating — The Pod enters the Terminating state when a user deletes it or when the controller automatically terminates it, typically during a scaling down or update operation. The Pod begins the process of shutting down its containers and cleaning up resources.

  8. Evicted — The Pod enters the Evicted state when it is removed from a node due to a lack of resources (e.g., CPU, memory) or due to node maintenance, such as node draining or resource pressure.
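
To inspect which phase or status a Pod is in, you can use standard kubectl commands (my-pod is a placeholder name):

# List Pods with their high-level status
kubectl get pods

# Print only the phase of a specific Pod
kubectl get pod my-pod -o jsonpath='{.status.phase}'

# Show detailed status, recent events, and per-container state
kubectl describe pod my-pod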

While a Pod is running, the kubelet can restart containers to handle certain kinds of faults. Within a Pod, Kubernetes tracks the different container states and determines what action to take to make the Pod healthy again.

Container States in Kubernetes:

Kubernetes tracks the state of each container inside a Pod alongside the overall Pod phase. These states help monitor and manage container behavior. Lifecycle hooks can be used to trigger actions at specific points in a container’s lifecycle.

Once the scheduler assigns a Pod to a Node, the kubelet initiates the creation of containers using the configured container runtime. Containers can exist in one of three states: Waiting, Running, or Terminated.

To check the state of a Pod’s containers, you can use kubectl describe pod <name-of-pod>. The output shows the state for each container within that Pod.

  1. Waiting: If a container is not in either the Running or Terminated state, it is Waiting. A container in the Waiting state is still running the operations it requires in order to complete start up:

    For example, pulling the container image from a container image registry, or applying Secret data. When you use kubectl to query a Pod with a container that is Waiting, you also see a Reason field to summarize why the container is in that state.

  2. Running: The Running status indicates that a container is executing without issues. If there was a postStart hook configured, it has already executed and finished. When you use kubectl to query a Pod with a container that is Running, you also see information about when the container entered the Running state.

  3. Terminated: A container in the Terminated state began execution and then either ran to completion or failed for some reason. When you use kubectl to query a Pod with a container that is Terminated, you see a reason, an exit code, and the start and finish time for that container's period of execution.

    If a container has a preStop hook configured, this hook runs before the container enters the Terminated state.
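
Besides kubectl describe, the per-container state is also exposed in the Pod's status fields; for example (my-pod is a placeholder name):

# Print the state (Waiting, Running, or Terminated) of every container in the Pod
kubectl get pod my-pod -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.state}{"\n"}{end}'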

Container restart policy:

The spec of a Pod has a restartPolicy field with possible values Always, OnFailure, and Never. The default value is Always.

  • Always: Automatically restarts the container after any termination.

  • OnFailure: Only restarts the container if it exits with an error (non-zero exit status).

  • Never: Does not automatically restart the terminated container.

When the kubelet handles container restarts according to the configured restart policy, this applies only to restarts that create replacement containers inside the same Pod, running on the same node. After containers in a Pod exit, the kubelet restarts them with an exponential backoff delay (10s, 20s, 40s, …), which is capped at 300 seconds (5 minutes). Once a container has executed for 10 minutes without any problems, the kubelet resets the restart backoff timer for that container.
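
As a small illustration, here is a sketch of a run-to-completion Pod that uses OnFailure (the Pod name one-shot-task and its command are made up for this example):

apiVersion: v1
kind: Pod
metadata:
  name: one-shot-task
spec:
  restartPolicy: OnFailure            # restart the container only if it exits with a non-zero code
  containers:
  - name: task
    image: busybox:latest
    command: ["sh", "-c", "echo processing...; exit 0"]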

Pod conditions:

A Pod has a PodStatus, which has an array of PodConditions through which the Pod has or has not passed. Kubelet manages the following PodConditions:

  • PodScheduled: the Pod has been scheduled to a node.

  • PodReadyToStartContainers: (beta feature; enabled by default) the Pod sandbox has been successfully created and networking configured.

  • ContainersReady: all containers in the Pod are ready.

  • Initialized: all init containers have completed successfully.

  • Ready: the Pod is able to serve requests and should be added to the load balancing pools of all matching Services.
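
These conditions can be read directly from the Pod status; they also appear in the Conditions section of kubectl describe pod (my-pod is a placeholder name):

# Print each condition type with its current status
kubectl get pod my-pod -o jsonpath='{range .status.conditions[*]}{.type}{"="}{.status}{"\n"}{end}'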

Termination of Pods in Kubernetes:

Pod termination occurs when a Pod is deleted by a user or a controller, or when it is evicted due to resource constraints or node maintenance. Kubernetes ensures a graceful shutdown process to maintain application stability and minimize disruptions.

Pod termination flow:

  1. You use the kubectl tool to manually delete a specific Pod, with the default grace period (30 seconds).

  2. Kubernetes sends a SIGTERM signal to the containers in the Pod.

  3. Containers are given time (defined by the terminationGracePeriodSeconds) to perform cleanup tasks, such as closing connections or saving state.

The Pod in the API server is updated with the time beyond which the Pod is considered “dead” along with the grace period. If you use kubectl describe to check the Pod you're deleting, that Pod shows up as "Terminating". On the node where the Pod is running: as soon as the kubelet sees that a Pod has been marked as terminating (a graceful shutdown duration has been set), the kubelet begins the local Pod shutdown process.
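
The grace period can be tuned, either at deletion time or in the Pod spec itself; a brief sketch (the Pod names are illustrative):

# Override the default 30-second grace period for a single delete
kubectl delete pod my-pod --grace-period=60

Alternatively, set it declaratively in the Pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: slow-shutdown-pod
spec:
  terminationGracePeriodSeconds: 60   # time allowed for cleanup after SIGTERM before SIGKILL
  containers:
  - name: app
    image: nginx:latest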

How to Create a Pod?

You can create a Pod in Kubernetes using YAML configuration files or the kubectl command.

Using YAML File:

A Pod is typically defined in a YAML file, specifying its configuration, such as the container image, ports, resources, and environment variables. Here’s a basic example:

Example YAML File (pod.yaml)

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx:latest
    ports:
    - containerPort: 80

This YAML file creates a Pod with the following:

  • Pod name: my-pod

  • Container name: my-container

  • Image: nginx:latest

  • Port: Exposes port 80 on the container

Create the Pod with the kubectl apply command:

kubectl apply -f pod.yaml

To check the Pod’s status:

kubectl get pods

Key Notes:

  • The YAML configuration provides more flexibility and is the recommended approach for managing Pods, especially in production environments.

  • Pods can be defined as part of a higher-level object like a Deployment, which ensures Pods are maintained and scaled automatically.
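
For instance, a minimal Deployment that keeps three replicas of an nginx Pod running might look like the sketch below (names and label values are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3                         # the Deployment maintains three identical Pods
  selector:
    matchLabels:
      app: my-app
  template:                           # Pod template embedded in the Deployment
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx:latest
        ports:
        - containerPort: 80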

Ensuring Pod Health in Kubernetes:

In Kubernetes, ensuring the health of Pods is critical for maintaining the availability and reliability of applications. Kubernetes offers several mechanisms to monitor and manage the health of Pods.

There are three types of Probes:

  1. Liveness Probes — A Liveness Probe checks if a container within a Pod is still running as expected. If the container fails the probe, Kubernetes restarts it automatically.

  2. Readiness Probes — A Readiness Probe checks if the container is ready to handle traffic. If the probe fails, the endpoints controller removes the IP address of the Pod from the endpoints of all Services that match the Pod.

  3. Startup Probe — A Startup Probe is used to determine if a container has successfully started. It is particularly useful for containers that have long initialization times and might need more time to start before they are considered healthy.
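
A hedged sketch of how the three probe types can be configured on a single container (the paths, ports, and timings below are illustrative, not prescriptive):

apiVersion: v1
kind: Pod
metadata:
  name: probed-pod
spec:
  containers:
  - name: web
    image: nginx:latest
    ports:
    - containerPort: 80
    startupProbe:                     # give the application time to finish starting
      httpGet:
        path: /
        port: 80
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:                    # restart the container if this begins to fail
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
    readinessProbe:                   # remove the Pod from Service endpoints while this fails
      tcpSocket:
        port: 80
      periodSeconds: 5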

To perform these diagnostic checks, the kubelet calls a handler implemented by the container. These handlers are responsible for performing health checks on containers within Pods, and there are three primary types:

  1. HTTP GET Handler: The HTTP GET Handler is used in Kubernetes to check the health of a container by sending an HTTP GET request to a specific URL path within the container. This handler is typically used when the container exposes an HTTP endpoint to report its health status, often referred to as a health check endpoint.

  2. TCP Socket Handler: The TCP Socket Handler in Kubernetes is used to perform health checks by attempting to open a TCP connection to a specified port on the container. If the connection is successful, the container is considered healthy. If the connection fails (i.e., the container is not listening on the specified port), the container is considered unhealthy.

  3. ExecAction Handler: The ExecAction in Kubernetes is a type of handler used for performing health checks or executing commands inside a container. When configured, Kubernetes runs a specific command inside the container, and if the command returns a successful exit code (typically 0), the container is considered healthy. If the command returns a non-zero exit code, the container is considered unhealthy.
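
As a final sketch, an exec handler runs a command inside the container and treats exit code 0 as healthy (the Pod name and the /tmp/healthy file are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: exec-probe-pod
spec:
  containers:
  - name: app
    image: busybox:latest
    command: ["sh", "-c", "touch /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command:                      # healthy while this command exits with code 0
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5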
