The Complete Kubectl Cheat Sheet [PDF download]
Kubernetes is one of the most well-known open-source systems for automating and scaling containerized applications. Usually, you declare the state of the desired environment, and the system will work to keep that state stable. To make changes “on the fly,” you must engage with the Kubernetes API. 

This is exactly where the Kubernetes command-line tool, Kubectl, comes in. Whether you’re new to kubectl and want to learn more, or you’ve been working with it for years, this cheat sheet is exactly what you need to start sending commands to your Kubernetes clusters. This article will cover all the essential Kubectl concepts and commands. We recommend you have the PDF cheat sheet version on hand when your application misbehaves and you need a quick reference guide to help you sort it out. 

What is Kubectl?

Kubectl is the Kubernetes command-line tool. It allows developers to communicate with a Kubernetes cluster’s control plane. You can inspect and manage cluster resources, deploy applications, and view logs with it. Of course, to use the tool, you’ll need to install it first. You can install Kubectl on Mac, Linux, and Windows. It’s important to use a Kubectl version within one minor version difference from your cluster. The Kubernetes install tools docs have all the instructions you’ll need for your preferred work environment.
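For example, you can verify what you installed and compare the client and server versions to check the skew (a minimal sketch; the exact output fields vary between kubectl releases):

# Client version only (useful right after installation)
kubectl version --client

# Client and server versions together, for checking the version skew
kubectl version --output=yaml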

Kubectl commands list

Below is a list of all the relevant commands you can use in Kubectl, separated by function. 

Kubectl objects

Kubernetes objects are persistent entities in the Kubernetes system, used to represent the state of your cluster. An object is effectively a “record of intent”: once you create it, the Kubernetes system will constantly work to ensure it exists. By creating an object, you’re telling the Kubernetes system what your cluster’s desired state looks like.

Create multiple YAML objects from stdin

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-rest
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    args:
    - sleep
    - "1000000"
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-rest-less
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    args:
    - sleep
    - "1000"
EOF

Create a secret with several keys

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: origsecret
type: Opaque
data:
  password: $(echo -n "f44lar7" | base64 -w0)
  username: $(echo -n "john" | base64 -w0)
EOF


Resources

Kubectl allows you to create, update, patch, edit, scale, and delete resources. You can also use the interface to look for and view information about various resources. In this context, a resource is an endpoint in the Kubernetes API. If you aim to work with multiple resources, it might be easier to list them all in a single manifest file (a YAML or JSON file) and use kubectl as a bridge between your manifest and the Kubernetes API.
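For instance, a single manifest can declare several resources separated by `---`, and one kubectl apply call creates them all (a minimal sketch; the resource names are placeholders):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  greeting: hello
---
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo
  ports:
  - port: 80
EOF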

Viewing and finding resources

List all services in the namespace

kubectl get services

List all pods in all namespaces
kubectl get pods --all-namespaces

List all pods in the current namespace, with additional details

kubectl get pods -o wide

List a particular deployment

kubectl get deployment dep-one

List all pods in the namespace

kubectl get pods

Get a pod’s manifest YAML

kubectl get pod pod-one -o yaml

Describe pods/nodes with verbose output

kubectl describe nodes my-node
kubectl describe pods my-pod

List services sorted by name

kubectl get services --sort-by=.metadata.name

List pods sorted by restart count

kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'

List PersistentVolumes sorted by capacity

kubectl get pv --sort-by=.spec.capacity.storage

Get the version label of all pods with the label app=derollo

kubectl get pods --selector=app=derollo -o jsonpath='{.items[*].metadata.labels.version}'

Retrieve the value of a key with dots, e.g. ‘ca.crt’

kubectl get configmap myconfig \
  -o jsonpath='{.data.ca\.crt}'

Retrieve a base64 encoded value with dashes instead of underscores

kubectl get secret my-secret --template='{{index .data "key-name-with-dashes"}}'

Get all worker nodes (use a selector to exclude results that have the label ‘node-role.kubernetes.io/control-plane’)

kubectl get node --selector='!node-role.kubernetes.io/control-plane'

Get all running pods in the namespace

kubectl get pods --field-selector=status.phase=Running

Get ExternalIPs of all nodes

kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

List the names of pods that belong to a particular ReplicationController. The `jq` command is useful for transformations that are too complex for jsonpath; it can be found at https://stedolan.github.io/jq/.

sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})

Show labels for all pods (or any other Kubernetes object that supports labeling)

kubectl get pods --show-labels

Check which nodes are ready

JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
 && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"

Output decoded secrets without external tools

kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'

List all Secrets currently in use by a pod

kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq

List all containerIDs of initContainer of all pods. It can be helpful when cleaning up stopped containers or avoiding removing initContainers.

kubectl get pods --all-namespaces -o jsonpath='{range .items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3

List Events sorted by timestamp

kubectl get events --sort-by=.metadata.creationTimestamp

Compare the current state of the cluster against the state the cluster would be in if the manifest were applied.

kubectl diff -f ./my-manifest.yaml

Produce a period-delimited tree of all keys returned for nodes. It can be helpful when locating a key within a complex nested JSON structure.

kubectl get nodes -o json | jq -c 'paths|join(".")'

Produce a period-delimited tree of all keys returned for pods, etc.

kubectl get pods -o json | jq -c 'paths|join(".")'

Produce ENV for all pods, assuming you have a default container for the pods, the default namespace, and that the `env` command is supported. This is helpful when running any supported command across all pods, not just `env`.

for pod in $(kubectl get po --output=jsonpath={.items..metadata.name}); do echo $pod && kubectl exec -it $pod -- env; done

Get a deployment’s status subresource

kubectl get deployment nginx-deployment --subresource=status

Creating resources

Create from a single file:

kubectl apply -f ./my-manifest.yaml

Create from multiple files:

kubectl apply -f ./my1.yaml -f ./my2.yaml

Create resources in all manifest files in dir:

kubectl apply -f ./dir

Create resources from a URL:

kubectl apply -f https://git.io/vPieo

Updating resources

Perform a rolling update of the “abc” containers of the “frontend” deployment by updating the image:

kubectl set image deployment/frontend abc=image:v2

Rollback to the previous deployment:

kubectl rollout undo deployment/frontend

Rollback to a specific revision:

kubectl rollout undo deployment/frontend --to-revision=3 

Watch the rolling update status of “backend” deployment until completion:

kubectl rollout status -w deployment/backend 

Perform a rolling restart of the “frontend” deployment:

kubectl rollout restart deployment/frontend

Force replace, delete, and then re-create the resource. Note that this command may cause a service outage.

kubectl replace --force -f ./pod.json

Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 3100:

kubectl expose rc nginx --port=80 --target-port=3100

Update a single-container pod’s image version (tag) to v5:

kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v5/' | kubectl replace -f -

Add the label new-label=timely to the pod my-pod:

kubectl label pods my-pod new-label=timely

Add an annotation to the pod my-pod:

kubectl annotate pods my-pod icon-url=http://goo.g3/XCMGh

Autoscale a deployment named “ipsum”:

kubectl autoscale deployment ipsum --min=2 --max=10

Patching resources

Partially update a node:

kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'

Update a container’s image. The spec.containers[*].name field is required since it’s the merge key:

kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'

Update a container’s image using a JSON patch with arrays:

kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'

Remove the “ludicrousPatch” field from a deployment’s first container using a JSON patch with positional arrays:

kubectl patch deployment valid-deployment  --type json   -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/ludicrousPatch"}]'

Add a new element to a positional array:

kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'

Editing resources

Edit the service named “service-registry”:

kubectl edit svc/service-registry

Use an alternative editor:

KUBE_EDITOR="nano" kubectl edit svc/service-registry

Scaling resources

Scale replica set named ‘ipsum’ to 3:

kubectl scale --replicas=3 rs/ipsum

Scale a resource specified in “ipsum.yaml” to 3:

kubectl scale --replicas=3 -f ipsum.yaml

Scale mysql to 5 (when the deployment named mysql’s current size is 2):

kubectl scale --current-replicas=2 --replicas=5 deployment/mysql

Scale multiple replication controllers:

kubectl scale --replicas=5 rc/ipsum rc/lor rc/bac

Deleting resources

Delete a pod using the type and name specified in delpod.json:

kubectl delete -f ./delpod.json 

Delete a pod immediately:

kubectl delete pod unwanted --now

Delete pods and services with the same names “bak” and “far”:

kubectl delete pod,service bak far

Delete pods and services with label name=delLabel:

kubectl delete pods,services -l name=delLabel

Delete all pods and services in namespace ns-del:

kubectl -n ns-del delete pod,svc --all

Delete all pods matching the awk pattern3 or pattern5:

kubectl get pods  -n mynamespace --no-headers=true | awk '/pattern3|pattern5/{print $1}' | xargs  kubectl delete -n mynamespace pod

Kubectl get is probably the most helpful command in collecting information. It allows you to retrieve information about all Kubernetes objects and nodes in the Kubernetes data plane. The most common objects you are likely to query are pods, services, deployments, stateful sets, and secrets. 

The get command offers a range of possible output formats:

-o wide is like verbose; it adds more information, and the extra columns depend on the type of object being queried.

-o yaml and -o json output the complete current state of the object and likely include more information than the original manifest files.

-o jsonpath allows you to select the information you want from the full JSON of the -o json option using the jsonpath notation.

-o go-template allows you to apply Go templates for more advanced features. Feel free to skip this one if you’re not fluent in Golang.

Here are some examples:

List all pods in the default namespace:

kubectl get pod

Get more information about a given pod:

kubectl -n mynamespace get po mypod-0 -o wide

Get the full state in YAML of a given pod:

kubectl -n mynamespace get pods/mypod -o yaml

Get the services in the default namespace:

kubectl get svc

Get the value of a secret:

kubectl -n mynamespace get secrets MYSECRET \
    -o 'jsonpath={.data.DB_PASSWORD}' | base64 -d

Get the logs from a container:

kubectl logs mypod-0 -c myapp

Display endpoint information about the control plane and services in the cluster:

kubectl cluster-info

Display the Kubernetes version running on the client and server:

kubectl version

View the cluster configuration:

kubectl config view

List available API resources:

kubectl api-resources

List everything for all namespaces:

kubectl get all --all-namespaces

DaemonSet 

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. Deleting a DaemonSet will clean up the Pods it created.

Some typical uses of a DaemonSet are:

  • Cluster storage daemon that can be run on every node
  • Logs collection daemon that can be run on every node
  • Node monitoring daemon that can be run on every node

You can use a single DaemonSet to cover all use cases for all nodes, or multiple DaemonSets, one for each type of daemon, with different optional flags and different memory and CPU requests.

You can use shortcode ds to denote a DaemonSet

Shortcode = ds

List one or more daemonSets:

kubectl get daemonset

Edit and update the definition of one or more daemonSet:

kubectl edit daemonset <daemonset_name>

Delete a daemonSet

kubectl delete daemonset <daemonset_name>

Create a new daemonSet from a manifest file:

kubectl apply -f <daemonset_manifest>.yaml

Check the rollout status of a daemonSet:

kubectl rollout status daemonset <daemonset_name>

Display the detailed state of daemonSets within a namespace

kubectl describe ds <daemonset_name> -n <namespace_name>

Deployments

A deployment runs multiple copies of your application and automatically replaces any failed or unresponsive instances. The Kubernetes Deployment Controller manages deployments. The controller ensures that user requests are served through one or more instances of your application.

You can use shortcode deploy to denote deployment

Shortcode = deploy

List one or more deployments

kubectl get deployment

Display the detailed state of one or more deployments

kubectl describe deployment <deployment_name>

Edit and update the definition of one or more deployments on the server

kubectl edit deployment <deployment_name>

Create a new deployment:

kubectl create deployment <deployment_name> --image=<image>

Delete deployments

kubectl delete deployment <deployment_name>

See the rollout status of a deployment

kubectl rollout status deployment <deployment_name>

Namespaces

In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Resource names must be unique within a single namespace but not across multiple namespaces. Namespace-based scoping applies only to namespaced objects (e.g., Deployments, Services, etc.) and not to cluster-wide objects (e.g., StorageClass, Nodes, PersistentVolumes, etc.).

You can use shortcode ns to denote namespace

Shortcode = ns

Create a namespace:

kubectl create namespace <namespace_name>

List one or more namespaces:

kubectl get namespace <namespace_name>

Display the detailed state of one or more namespace:

kubectl describe namespace <namespace_name>

Delete a namespace:

kubectl delete namespace <namespace_name>

Edit and update a namespace definition:

kubectl edit namespace <namespace_name>

Display the resource (CPU/memory) usage of the pods in a namespace:

kubectl top pod -n <namespace_name>

Events 

A Kubernetes event is an object that the system generates automatically in response to changes in other resources, such as nodes, pods, or containers.

Kubernetes events can help you understand how Kubernetes resource decisions are made and so can be helpful in debugging. You can think of events like the breadcrumbs of Kubernetes.

You can use shortcode ev to denote events.

Shortcode = ev

List all recent events for all system resources:

kubectl get events

List all events of type warning only:

kubectl get events --field-selector type=Warning

List all events (excluding Pod events):

kubectl get events --field-selector involvedObject.kind!=Pod

Pull all events for a single node with a specific name:

kubectl get events --field-selector involvedObject.kind=Node,involvedObject.name=<node_name>

Filter out normal events from a list of events:

kubectl get events --field-selector type!=Normal

Logs

System component logs record events in a cluster, which is helpful for debugging. Since logs are constantly updated, this will only display the latest logs. In a production environment, it’s recommended to use a log aggregator and do your searches and filtering through it.

There can be two types of logs, fine-grained (more details) and coarse-grained (fewer details). Coarse-grained logs represent errors within a component, while fine-grained logs represent step-by-step traces of events.

Print logs for a specific pod:

kubectl logs <pod_name>

Print the logs for the last hour for a pod:

kubectl logs --since=1h <pod_name>

Retrieve the most recent 20 lines of logs:

kubectl logs --tail=20 <pod_name>

Retrieve the logs from the pods backing a service. Optionally, you can select which container:

kubectl logs -f svc/<service_name> [-c <container_name>]

Print the logs for a pod:

kubectl logs -f <pod_name>

Print the logs for a container in a pod:

kubectl logs -c <container_name> <pod_name>

Save the logs for a pod into a file named ‘pod.log’:

kubectl logs <pod_name> > pod.log

Check the logs for a previously failed pod

kubectl logs --previous <pod_name>

ReplicaSets

ReplicaSets ensure that a stable set of replica pods is running, as defined in the deployment file. You might use a ReplicaSet to confirm that identical pods are available.

You can use shortcode rs to denote ReplicaSets.

Shortcode = rs

List all the ReplicaSets:

kubectl get replicasets

Show the detailed state of one or more ReplicaSets:

kubectl describe replicasets <replicaset_name>

Scale a ReplicaSet to x replicas instead of the current amount:

kubectl scale --replicas=[x] rs/<replicaset_name>

Secrets 

A Secret is an object containing sensitive data such as a password, a token, or a key. Without Secrets, this information would typically be stored in a Pod specification or a container image. Using a Secret means you don’t have to include confidential or sensitive information in your application code.

Create a new Secret (for example, a generic Secret from literal values):

kubectl create secret generic <secret_name> --from-literal=<key>=<value>

List all Secrets:

kubectl get secrets

List all the required details about Secrets:

kubectl describe secrets

Delete a Secret:

kubectl delete secret <secret_name>

Helm

Helm is a Kubernetes deployment tool for automating the creation, packaging, configuration, and deployment of applications and services to Kubernetes clusters. All the following commands assume a Helm-deployed application.

List the releases in the current namespace:

helm list

List the releases in all namespaces:

helm list --all-namespaces

Get details about the release in a specific namespace:

helm list --namespace jenkins

Get the values used in a specific application:

helm get values jenkins -n jenkins

Get all the information used in a specific application:

helm get all jenkins -n jenkins

Services

Services are an abstract way to expose an application running on a set of Pods as a network service.

Kubernetes assigns each pod a unique IP address, and a single DNS name can spread the load across multiple pods. This allows you to treat the various pods as a sort of cloud-based black box containing the desired service.

You can use shortcode svc to denote Services.

Shortcode = svc

List one or more services:

kubectl get services

Show the detailed state of all services:

kubectl describe services

Expose a replication controller, service, deployment, or pod as a new Kubernetes service:

kubectl expose deployment [deployment_name]

Edit and update the definition of one or more services:

kubectl edit services

StatefulSet

StatefulSets represent a set of pods with unique, persistent identities and stable hostnames that Kubernetes maintains regardless of where they are scheduled. You can think of them like site URLs: they’ll (almost) always be there when you come to visit. The persistent disk storage associated with the StatefulSet is responsible for storing state information and other resilient data for the given StatefulSet pod.

You can use shortcode sts to denote StatefulSet.

Shortcode = sts

List a StatefulSet:

kubectl get statefulset

Delete StatefulSet only (not pods):

kubectl delete statefulset/[stateful_set_name] --cascade=orphan


In a nutshell

We’ve covered all the important actions you can take using Kubectl, including how to check your pods and clusters, create new objects, handle resources, and gather and display information. You can visit and revisit this cheat sheet whenever you need a little help. 

Managing your app and ensuring it runs smoothly can be time-consuming, especially if you don’t use an observability platform. Lightrun enables you to add logs, metrics, and traces to your app in real time while the app is running. Spend your time coding, not debugging. Request a demo to see how Lightrun works.

Top 5 Debugging Tips for Kubernetes DaemonSet
Kubernetes is the most popular container orchestration tool for cloud-based web development. According to Statista, more than 50% of organizations used Kubernetes in 2021. This may not surprise you, as the orchestration tool provides some fantastic features to attract developers. DaemonSet is one of the highlighted features of Kubernetes, and it helps developers to improve cluster performance and reliability. Although it is widely used, debugging DaemonSet can be challenging since it is in the application layer. So, this article will discuss five essential tips to help you debug Kubernetes DaemonSet.

What is a Kubernetes DaemonSet? 

Kubernetes DaemonSet is a Kubernetes object that ensures all nodes (or a selected subset) in a cluster run a single copy of a pod.

When you add new nodes to a cluster, the DaemonSet controller automatically adds a pod to that node. Similarly, pods will be erased when a node is deleted from the cluster.

Most importantly, DaemonSet improves the performance and reliability of your Kubernetes cluster while distributing tasks across all nodes. Some developers argue that we do not need to consider where pods run on a Kubernetes cluster. But DaemonSet is efficient for long-running services like log collection, node monitoring, and cluster storage. Also, you can create multiple DaemonSets for a single type of daemon using different flags, memory capacities, and CPU requests.

Taints and tolerations for DaemonSet

Taints and tolerations are used together to stop pods from being scheduled onto inappropriate nodes. You can apply one or more taints to a node, and the node will then refuse to accept pods that do not tolerate those taints. Tolerations, in turn, enable the scheduler to find nodes with matching taints and schedule pods on them. However, a toleration does not guarantee scheduling.
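As a sketch (the node name and taint key here are placeholders, not values from this article), you could dedicate a node to logging daemons like so:

# Taint the node so that only pods tolerating "dedicated=logging" are scheduled on it
kubectl taint nodes node-1 dedicated=logging:NoSchedule

# Remove the taint later by appending a minus sign
kubectl taint nodes node-1 dedicated=logging:NoSchedule-

A matching toleration in the DaemonSet’s pod template would look like:

tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "logging"
  effect: "NoSchedule"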

Top 5 debugging tips for Kubernetes DaemonSet 

Now that we have a broad understanding of Kubernetes DaemonSet, let’s discuss a few tips you can use to ease the Kubernetes DaemonSet debugging process.

1. Find unhealthy pods

A DaemonSet is considered unhealthy when it does not have one pod running in each node. Unhealthy DaemonSets are caused mainly by pending pods or pods stuck in a crash loop.

You can easily find unhealthy pods in a Kubernetes cluster by listing all the available pods. The below command will list all the pods in the cluster with their statuses.

kubectl get pod -l app=[label]

You can identify the unhealthy pods from their status once they are listed. Pods with CrashLoopBackOff, Pending, and Evicted statuses are considered unhealthy. Once you identify the unhealthy pods, you can use the below commands to get more details and logs on the pod.

# Get more information about the pod
kubectl describe pod [pod-name]

# Get pod logs
kubectl logs [pod-name]

Finally, you can use the pod information and logs to determine the issue in the DaemonSet. This approach saves you a lot of time since you do not need to debug all the pods in the cluster to find the problem. You can prioritize the unhealthy pods first.

2. Resolve the nodes that don’t have enough resources 

As mentioned, pods with CrashLoopBackOff status are considered unhealthy. This error is mainly caused by a lack of resources available to run the pod. You can follow the below steps to quickly troubleshoot pods with CrashLoopBackOff status.

First, you need to find the node that runs the unhealthy pod:

kubectl get pod [pod-name] -o wide

Then, you can use the node name from the above command’s result to monitor the available node resources:

kubectl top node [node-name]

If you notice a lack of resources in the node, you can resolve it by:

  • Decreasing the memory and CPU requests of the DaemonSet (see the sketch after this list).
  • Upgrading nodes to accommodate more pods.
  • Moving affected pods to another node.
  • Using taints and tolerations to prevent pods from running on nodes with lower resources.
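For the first option, here is a hedged sketch of lowering a DaemonSet container’s requests with a JSON patch (the DaemonSet name and values are placeholders; the "replace" op assumes requests are already set, so use "add" otherwise):

kubectl patch daemonset my-daemonset --type='json' -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value": "100m"},
  {"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/memory", "value": "128Mi"}
]'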

However, if you don’t notice a lack of resources in the node, you will have to check node logs and investigate the pod command to find the issue.

3. Identify container issues 

If you can’t find any issues in the pods, the error might be caused by a container within a pod. Using the wrong image is the main reason for container issues. So, first, you need to find the image name from the DaemonSet manifest and verify that you have used the correct image.

If that is not the case, you will have to open a shell in the container image and investigate whether there are any application or configuration issues. You can use the below command to run the image with an interactive shell:

docker run -ti --rm ${image} /bin/bash

4. Use Kubectl commands for troubleshooting

Using Kubectl commands is another excellent approach to debugging Kubernetes DaemonSets. Kubectl is a command line tool provided by Kubernetes to communicate easily with Kubernetes clusters. You can use it to perform any action on a cluster, including deploying apps and managing cluster resources. Most importantly, you can use Kubectl on Windows, macOS, and multiple varieties of Linux.

Here are some of the most popular Kubectl commands you can use to debug DaemonSets:

  • Kubectl describe – Provides detailed information on deployments, services, and pods. When debugging, you can use this command to fetch details on nodes to identify memory and disk space issues.
  • Kubectl logs – Used to display logs from a Kubernetes resource. These logs can be a lifesaver when you need more information to determine an error’s root cause.
  • Kubectl exec – You can execute commands in a running container using this command. You can use this command to view configuration, startup scripts, and permissions when debugging.
  • Kubectl auth – This is another essential command for debugging. It allows you to verify that a selected user or a group can perform a particular action (see the examples after this list).
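Here are a few hedged examples of these commands in action (the pod, container, node, and namespace names are placeholders):

# Fetch node details, including memory and disk pressure conditions
kubectl describe node [node-name]

# Stream logs from a specific container in a pod
kubectl logs [pod-name] -c [container-name] -f

# Open an interactive shell inside a running container
kubectl exec -it [pod-name] -c [container-name] -- /bin/sh

# Check whether the current user may delete pods in a namespace
kubectl auth can-i delete pods -n [namespace]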

5. Invest in an observability platform

Logs are an essential part of application debugging. It is no different for Kubernetes, and you can add logs as you see fit to make the debugging process more straightforward. However, manually adding logs is not an easy task. It takes a lot of time, and there can be human errors.

The best way to add logs to your application is by using a specialized observability tool like Lightrun. Such tools help developers monitor their applications in real-time, identify issues, and quickly fix them. Using a specialized tool makes the debugging process much more efficient and faster. 

Next steps

The five tips we discussed to debug Kubernetes DaemonSet should make the debugging process easier for you. However, debugging DaemonSets is naturally challenging since daemons are placed in the application layer of the workload. It is always more beneficial to use an observability tool like Lightrun to automate some of your work. Lightrun enables you to add logs, metrics, and traces to your Kubernetes clusters and monitor these in real-time while your app is running. You can find more details on how Lightrun works by requesting a demo.

The SRE’s Quick Guide to Kubectl Logs
Logs are key to monitoring the performance of your applications. Kubernetes offers a command line tool for interacting with the control plane of a Kubernetes cluster called Kubectl. This tool allows debugging, monitoring, and, most importantly, logging capabilities. 

There are many great tools for SREs. However, Kubernetes supports Site Reliability Engineering principles through its capacity to standardize the definition, architecture, and orchestration of containerized applications, and it provides the infrastructure for scalable, reliable, distributed software services. This article will explain how to use the Kubernetes built-in debugging solution, Kubectl.

What is Kubernetes?

Kubernetes is an open-source platform to manage, scale and automate the deployment of containerized applications. It separates the containers that make up an application into logical units for easy administration.

There are several Kubernetes functionalities:

  1. Application hosting: You can choose which server will host the container and how it launches.
  2. Load balancing: Kubernetes calculates the best location to place containers and thus optimizes performance and availability.
  3. Storage management: To launch apps, Kubernetes mounts and adds your chosen storage system.
  4. Self-healing: Kubernetes will roll back for you if something goes wrong following a change to your application.

What is Kubectl? 

Kubectl is a Kubernetes command-line tool. You use it to manage and run your Kubernetes cluster. Although there are a few GUI tools for Kubernetes, the most popular method of communicating with a Kubernetes cluster is through Kubectl. It’s crucial since it enables you to view fundamental usage data and read logs from containers.

The Kubernetes API gives full control over Kubernetes. Every Kubernetes operation is exposed as an API endpoint and can be executed through an HTTP request. Thus, the primary function of Kubectl is to execute HTTP requests against the Kubernetes API.

The API reference contains the endpoints for all Kubernetes operations. To perform a real request against an endpoint, the API server’s URL must be prepended to the endpoint path given in the API reference. As a result, kubectl sends an HTTP request to the corresponding API URL whenever you run a command. All commands that communicate with the Kubernetes cluster work this way: in each case, Kubectl simply sends HTTP requests to the relevant Kubernetes API endpoints.
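You can observe this behavior directly: raising kubectl’s verbosity prints the HTTP requests it sends, and kubectl can also call an API path for you (a minimal sketch; the namespace is a placeholder):

# Print the HTTP requests kubectl issues (URLs, methods, response codes)
kubectl get pods -v=8

# Call an API endpoint directly through the API server
kubectl get --raw /api/v1/namespaces/default/pods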

Kubernetes consists of several parts that operate as distinct processes on cluster nodes. Each part has a specialized function; some run on the worker nodes while others run on the master nodes. The smallest execution unit in Kubernetes is a pod. Pods may contain one or more apps. 

Some of the core functions of Kubectl include:

  • Getting logs from a container that was previously created.
  • The kubelet running on the node handles log requests by reading directly from the log file.
  • Managing the cluster manager and executing commands against the Kubernetes cluster.
  • A centralized state of resources exists internally to manage CRUD operations.
  • Maintaining the health of your cluster and application while complying with industry standards for container orchestration.
  • Reading logs that the kubelet keeps on the node if a container restarts.

Logs in Kubernetes 

Logs are helpful for many reasons, including observing and monitoring the performance of our application and keeping tabs on sales, new users, requests, and other developments. We also need them for problem-solving; we review the logs whenever our program fails or something goes wrong. 

To review the logs for a pod, you can run kubectl logs [pod_name]

This log collecting strategy is less than ideal when using Kubernetes since you must gather logs for several pods (applications) across numerous cluster nodes. 

Therefore, the default Kubernetes logging architecture advises recording the standard output (stdout) and standard error output (stderr) to a log file from each container on the node. 

To view the logs for a specific container, you can run kubectl logs ${POD_NAME} ${CONTAINER_NAME}

Pods being deleted and then regenerated is very common in a Kubernetes environment. For instance, Kubernetes’ responsibilities include deleting a container when it crosses its resource limits. However, if your pod occasionally restarts, kubectl logs will only display the logs from the container that is currently running. To view the logs from a previously running container, you can add --previous to the kubectl logs command.

Adding --follow is an additional helpful option, enabling you to stream the logs directly from the active container in real time. Use this if you want to view what happens in the program while it is active or if you need to live-debug a problem. You may use the command kubectl logs [pod_name] --follow to receive a live stream of logs rather than repeatedly running kubectl logs.
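Putting the two flags together (the pod name is a placeholder):

# Logs from the previous, crashed instance of the container
kubectl logs my-pod --previous

# Stream logs from the running container in real time
kubectl logs my-pod --follow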

Discovering details about pods 

To check a pod’s status, you can run kubectl get pods

To get more information on a pod, you can run kubectl describe pods pod-name

Figure: Configuration information about the container(s) and Pod, and status information of each

There are three container states: Waiting, Running, and Terminated. Depending on the status, additional information appears.

If the container passes its most recent readiness probe, Ready will let you know. If there is no readiness probe, the container is deemed ready.

You can use Restart Count to find how many times the container has been restarted.

The binary Ready condition, which denotes whether the pod can handle requests or not, is the sole condition currently connected with a pod.

The last thing you see is a log of recent pod-related events. The system compresses many identical events by noting the first and last time an event occurred and the number of times it occurred.

Debugging non-running pods

A Pod might not fit on any node. 

To check the pod’s status, you can run kubectl get pods

To check why the pod is not running, you can run kubectl describe pod ${POD_NAME}

To examine any affected container, you can run kubectl logs ${POD_NAME} ${CONTAINER_NAME}

To examine containers that previously crashed, you can run kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}

Monitoring in Kubernetes 

Even though you can access logs and fix bugs manually using the guidance above, this is time-consuming and may not be the ideal long-term solution. To monitor your Kubernetes application properly and have clear visibility over all the logs, metrics, and traces of your app, you need to invest in a continuous debugging and observability platform. Such platforms enable you to easily make changes, resolve bugs, and collect data directly from your app in real time without redeploying or restarting.

Traditionally, the development stage includes exceptions, logs, performance metrics, and traces. However, the proliferation of the cloud, microservices, and serverless architectures created a gap in observability between development and production environments. This gap makes it difficult to foresee or duplicate production-only difficulties.

Lightrun is a developer-centric observability platform. It allows you to watch and debug the functioning of your application code by securely adding logs, metrics, and traces in real-time and as needed. There is no need for redeployment, restarts, or hotfixes.

Improve your code-level observability

The kubectl logs, kubectl describe, and kubectl get commands help review, explore, and analyze the status of the logs. However, a robust observability platform enables you to get an easily accessible overview of your cluster, which will prove invaluable in the long term. Get started with Lightrun’s playground to experience our platform’s functionalities in a real, live app. 

Lightrun Releases KoolKits – Debugging Toolkits for Kubernetes
KoolKits (Kubernetes toolkits) are highly-opinionated, language-specific, batteries-included debug container images for Kubernetes. In practice, they’re what you would’ve installed on your production pods if you were stuck during a tough debug session in an unfamiliar shell.

We created a quick, 2-minute explanation of the project if you prefer that to the written word:

To briefly give some background, note that these container images are intended for use with the new kubectl debug feature, which spins up ephemeral containers for interactive troubleshooting. A KoolKit will be pulled by kubectl debug, spun up as a container in your pod, and able to access the same process namespace as your original container.
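For example, attaching a KoolKit to a running pod might look like this (a sketch: the pod and target container names are placeholders, and the image tag assumes the JVM KoolKit published on Docker Hub):

# Spin up an ephemeral debug container inside my-pod, sharing the target's process namespace
kubectl debug -it my-pod --image=lightruncom/koolkits:jvm --target=my-app-container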

Since production containers are usually rather bare, using a KoolKit enables you to troubleshoot with power tools instead of relying on what was left behind due to the generosity (or carelessness) of whoever originally built the production image.

The tools in each KoolKit were carefully selected, and you can read more about the motivation behind this entire project below.

If you just want to take a look at the good stuff, feel free to check out the full project on GitHub.

Debugging Kubernetes is Hard

It’s not trivial to understand what’s going on inside a Kubernetes pod.

First of all, your application is not a single entity anymore; it comprises multiple pods, replicated for horizontal scaling, and sometimes even scattered across multiple clusters.

Furthermore, to access your application with local tools (like debuggers), you need to deal with pesky networking issues like discovery and port forwarding, which slows down the use of such tools. This can, of course, be solved by using a service mesh. But while the technology is slowly gaining traction, it requires implementing another layer of abstraction, which might make debugging harder, not easier.

And, the crown jewel of the distributed systems world: altering the state of, or completely halting, the running pod (e.g., when placing a breakpoint) might cause cascading failures in other parts of your system, which will exacerbate the existing problem.

The Motivation Behind KoolKits

Lightrun was built with Kubernetes in mind: we work across multiple pods, multiple clusters, and even multiple clouds. We understood early on that packing a punch by using the right tools is a great source of power for the troubleshooting developer, and we figured we’d find a way to give back to the community somehow. That’s how we came up with the idea for KoolKits.

Let’s dive deep for a second to explain why KoolKits can be pretty useful:

There’s a well-known Kubernetes best practice that states that one should build small container images. This makes sense for a few different reasons:

  1. Building the image will consume fewer resources (aka CI hours)
  2. Pulling the image will take less time (who wants to pay for so much ingress anyways?)
  3. Less stuff means less surface area exposed to security vulnerabilities, in a world where even no-op logging isn’t safe anymore

There’s also a lot of tooling in existence that helps you get there without doing too much heavy lifting:

  1. Alpine Linux base images are super small
  2. DistroLess Docker images go a step further and remove everything but the runtime
  3. Docker multi-stage builds help create thin final production images

The problem starts when you’re trying to debug what’s happening inside those containers. By using a small production image, you’re forsaking a large number of tools that are invaluable when wrapping your head around a problem in your application.

By using a KoolKit, you’re allowing yourself the benefits of a small production image without compromising on quality tools – each KoolKit contains hand-picked tools for the specific runtime it represents, in addition to a more generic set of tooling for Linux-based systems.

P.S. KoolKits was inspired by kubespy and netshoot.

Considerations

We made quite a few decisions during the construction of these images; some of the things we took into consideration are listed below.

Size of Images

KoolKits Docker images tend to run, uhm, rather large.

KoolKits are intended to be downloaded once, kept in the cluster’s Docker registry, and then spun up immediately on demand as containers. Since they’re not intended for constant pulling, and since they’re intended to be packed with goodies, this is a side effect we’re willing to endure.

Using Ubuntu base images

Part of the reason it’s hard to create a really slim image is our decision to go with a full Ubuntu 20.04 system as the basis for each KoolKit. This mainly came from our desire to replicate inside your clusters the same environment you would debug with locally.

For example, this means no messing around with Alpine alternatives to normal Ubuntu packages you’re used to working with. Actually, this means we have a way of including tools that have no Alpine versions in each KoolKit.

Using language version managers

Each KoolKit uses (wherever possible) a language version manager instead of relying on language-specific distros. This is done to allow you to install older runtime versions easily, and in order to allow you to swap between runtime versions at will (for example, to get specific versions of tooling that only exist for specific runtime versions), as need be.
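For instance, inside koolkit-node you might switch runtimes on the fly with nvm (a sketch; the version numbers are illustrative):

# List the Node versions nvm can install
nvm ls-remote

# Install and switch to an older runtime that matches the target container
nvm install 14.21.3
nvm use 14.21.3
node --version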


Available KoolKits

Each of the folders in the repo contains the Dockerfile behind the KoolKit and a short explanation of the debug image. All KoolKits are based on the ubuntu:20.04 base image, since real people need real shells.

The list of available KoolKits:

  1. koolkit-jvm – AdoptOpenJDK 17.0.2 & related tooling (including jabba for easy version management and Maven 3.8.4)
  2. koolkit-node – Node 16.13.1 & related tooling (including nvm for easy version management)
  3. koolkit-python – Python 3.10.2 & related tooling (including pyenv for easy version management)

Note that you don’t actually have to build them yourselves – all KoolKits are hosted publicly on Docker Hub and available free of charge.

KoolKits Coming up

Contribution

We’d be more than happy to add tools we missed to any image – just open a pull request or an issue to suggest one.
