
Understanding Kubernetes within containers

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It provides a platform to run containers on-premises (off-cloud), in a hybrid setup, or on public cloud infrastructure. This blog explains the main components of Kubernetes, their standard definitions, and how they work together within containerized applications.

Firstly, let’s look at the Kubernetes cluster. A Kubernetes cluster is a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node, and each worker node hosts the Pods that run the containers. Below you’ll find a list of some components related to Kubernetes and an explanation of their role within containerized applications.

  • The Kubernetes Engine – a system for managing and orchestrating containers. It is used to create and manage containers on clusters and to create Pods, replication controllers, jobs, services, and load balancers.
  • YAML – a data serialization language used to work with a Kubernetes cluster. Configuration files are written in YAML and applied to the cluster.
  • Node – a virtual or physical machine, depending on the cluster, that runs the actual Pods and provides the environment for containers to run in.
  • Pods – the smallest deployable units of computing that can be created and managed in Kubernetes.

Below is an example Pod creation YAML config for a Google Cloud cluster:
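A minimal sketch of such a config (the Pod name, labels, image tag, and script path here are illustrative assumptions, not taken from the original):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tomcat-pod          # Pod name (metadata)
  labels:
    app: tomcat             # label used by selectors and Services
spec:
  containers:
    - name: tomcat
      image: tomcat:9.0     # image tag the container is pulled and created from
      command: ["/bin/sh", "-c"]                        # command run when the container starts
      args: ["/usr/local/tomcat/bin/catalina.sh run"]   # arguments: the Tomcat start-up script
```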

In this example, the Pod name and label specifications define the metadata of the Pod, and the Pod spec has an image tag from which the container image is pulled and the container created. The command tag holds the command to run when the Pod is created and the container has been instantiated, and the arguments tag holds the arguments for that command. In the given example, a shell command is instantiated and executes the Tomcat start-up script.

  • Kubectl – the key command-line tool for working with any Kubernetes engine; it allows users to run commands against a Kubernetes cluster. For example, applying a YAML file: kubectl apply -f /path-to-yaml
  • ReplicaSets – maintain a stable set of replica Pods running in a cluster, guaranteeing that the specified number of identical Pods is available at any given time.
  • Deployment – a set of multiple, identical Pods with no unique identities. A Deployment runs multiple replicas of a Pod and automatically replaces Pods/instances if they fail. Deployments can scale the number of replica Pods, roll out updated code in a controlled manner, or easily roll back to an earlier deployment version.
  • Service – a logical abstraction for a deployed group of Pods in a cluster. Because Pods are ephemeral and can be destroyed and recreated at any time, it is difficult to work with a specific Pod directly; a Service provides a stable way to reach the set of Pods behind it.
  • Horizontal Pod Autoscaler (HPA) – automatically scales the number of Pods in a replication controller, Deployment, ReplicaSet, or StatefulSet based on observed resource-utilization metrics. It continuously monitors the metrics of the configured components and acts according to its configuration. For example, if the CPU utilization must be monitored continually and the number of Pods scaled up in response, the HPA can autoscale the Pods: kubectl autoscale deployment tomcat-deployment --cpu-percent=80 --min=5 --max=10. This example will autoscale a Deployment called tomcat-deployment to maintain at least 5 Pods under normal conditions and increase this to a maximum of 10 Pods when CPU usage exceeds 80%.
  • Helm – a package manager for Kubernetes.
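As an illustrative sketch of how a Deployment, its ReplicaSet-managed replicas, and a Service fit together (all names, the image tag, and the ports below are assumptions, not taken from the original):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
spec:
  replicas: 3                 # the Deployment creates a ReplicaSet that keeps 3 Pods running
  selector:
    matchLabels:
      app: tomcat
  template:                   # Pod template used for each replica
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - name: tomcat
          image: tomcat:9.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
spec:
  selector:
    app: tomcat               # routes traffic to any Pod carrying this label
  ports:
    - port: 80                # port the Service exposes
      targetPort: 8080        # port the container listens on
```

If a replica Pod fails, the ReplicaSet replaces it automatically, and the Service keeps routing traffic to whichever Pods currently match the label selector.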

The components mentioned above work together to provide a seamless application-service experience: failed components are automatically replaced, and workloads are scaled as required based on predefined metric rules, which prevents application crashes. This reduces manual intervention and increases the reliability and availability of the system.

Kubernetes can manage the scaling and failover of applications and provides deployment patterns and many other features for containers. It is a cost-effective solution for applications that need containerized workloads orchestrated across a cluster of hosts, and it simplifies CI/CD.

To learn more about the containerization of OpenText products, please contact OpenText™ Professional Services.

Author: Jayaram Patnala – Architect, Enterprise Managed Services

Professional Services

OpenText Professional Services offers the largest pool of OpenText EIM product and solution certified experts in the world. They bring market-leading field experience, knowledge, and innovative creativity from experience spanning more than 25 years and over 40,000 engagements.
