Lifecycle

The lifecycle of a Kubernetes application consists of several stages, from creation to deletion. As described in previous sections, Kubernetes continuously reconciles the actual state of your application with the desired state declared in your configuration files, ensuring that the application keeps running as specified.

```mermaid
graph TD
    A[Define Application Specification] -->|Create YAML/JSON manifests| B[Apply to Cluster]
    B -->|kubectl apply| C[Kubernetes API Server]
    C -->|Stores Configuration| D[etcd - Cluster State]
    C -->|Schedules Pods| E[Kube Scheduler]
    E -->|Assigns to Nodes| F[Kubelet on Nodes]
    F -->|Starts Containers| G[Pods Running]
    G -->|Expose via Services| H[Service Discovery]
    G -->|Monitored by Probes| I[Liveness and Readiness Checks]
    H -->|External Access| J[Ingress or Load Balancer]
    G -->|Handles Scaling| K[HorizontalPodAutoscaler]
    K -->|Adjusts Replicas| F
    G -->|Handles Updates| L[Rolling Updates/Canary Deployments]
    G -->|Handles Failures| M[Self-Healing Mechanisms]
    M -->|Restarts/Reschedules Pods| F
    L -->|Updated Pods Running| G
```

Health Checks

Kubernetes provides two types of health checks to monitor the health of your application:

Liveness Probes

A liveness probe determines whether a container is still running correctly. If the liveness probe fails repeatedly, Kubernetes restarts the container to restore the application to a healthy state.
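As a sketch, a liveness probe is declared on the container in a pod spec; the container name, image, port, and /healthz endpoint below are illustrative assumptions, not fixed conventions:

```yaml
# Illustrative pod spec; the image name and /healthz path are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:1.0
      livenessProbe:
        httpGet:                  # probe an HTTP endpoint inside the container
          path: /healthz
          port: 8080
        initialDelaySeconds: 10   # wait before the first probe
        periodSeconds: 5          # probe every 5 seconds
        failureThreshold: 3       # restart after 3 consecutive failures
```

Besides httpGet, probes can also run a command in the container (exec) or open a TCP socket (tcpSocket).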

Readiness Probes

A readiness probe is used to determine if a container is ready to accept traffic. If the readiness probe fails, Kubernetes will stop sending traffic to the container until it passes the probe.
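A readiness probe sits alongside the liveness probe in the container spec. A minimal fragment might look like this (the /ready path and port are assumptions):

```yaml
# Illustrative container fragment; the /ready path and port are assumptions.
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1     # one success marks the pod Ready again
```

While the probe fails, the pod is removed from the endpoints of any Service that selects it, so no traffic reaches it; unlike a failed liveness probe, the container is not restarted.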

Rolling Updates & Rollbacks

Kubernetes supports rolling updates and rollbacks for your applications. A rolling update replaces pods incrementally so the application stays available throughout the rollout, and if the new version misbehaves you can revert to the previous revision.
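A rolling update is configured through the Deployment's update strategy. The sketch below uses placeholder names (my-app, image tag 2.0):

```yaml
# Deployment update-strategy sketch; names and image tag are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the rollout
      maxSurge: 1         # at most one extra pod above the desired count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:2.0   # changing this image triggers a rolling update
```

To revert a failed rollout, kubectl rollout undo deployment/my-app restores the previous revision from the Deployment's rollout history.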

Self-Healing Mechanisms

Kubernetes provides self-healing mechanisms to ensure that your application remains available and responsive. If a pod fails, Kubernetes will automatically restart the pod or reschedule it on another node to maintain the desired state.

Scaling

Kubernetes provides several mechanisms for scaling your application:

  • Horizontal Pod Autoscaler (HPA): Automatically scales the number of replicas based on observed CPU or memory utilization, or on custom metrics.
  • Vertical Pod Autoscaler (VPA): Automatically adjusts the resource requests and limits for your pods based on resource utilization.
  • Cluster Autoscaler: Automatically adds nodes when pods cannot be scheduled for lack of resources, and removes nodes that are underutilized.
  • Manual Scaling: You can manually scale your application by changing the number of replicas in the deployment configuration.
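As a sketch, an HPA targeting a Deployment might look like this (the target name and utilization threshold are assumptions):

```yaml
# HorizontalPodAutoscaler sketch (autoscaling/v2); target name is an assumption.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:            # the workload whose replica count is adjusted
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale up above ~70% average CPU
```

For manual scaling, kubectl scale deployment/my-app --replicas=5 changes the replica count directly.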

Ingress & Load Balancing

To expose your application to external traffic, you can use an Ingress resource or a Service of type LoadBalancer. An Ingress defines HTTP routing rules that map external requests to Services inside the cluster, while a LoadBalancer Service provisions a stable external IP address for direct access.
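A minimal Ingress sketch, assuming a backing Service named my-app on port 80 (the host and path are illustrative):

```yaml
# Ingress sketch; host, path, and Service name are illustrative assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: app.example.com      # external hostname matched by the rule
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app     # Service that receives the routed traffic
                port:
                  number: 80
```

Note that an Ingress only takes effect if an Ingress controller (e.g. ingress-nginx) is running in the cluster.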

Cleanup

When you no longer need an application or resource, you can delete it from the cluster using kubectl delete, either by resource kind and name or by pointing at the original manifest with the -f flag. Kubernetes then removes the resource and garbage-collects its dependent objects, such as the pods owned by a deleted Deployment.