Short and Exciting Journey of M1 Build Agent Configuration

Apple's M1 chip was introduced back in November 2020, and as end users moved to M1-based Macs, it became essential to build applications compatible with the new architecture.

The M1 chip brings impressive improvements and features, but I won't cover them in this post; there are many resources on the internet that do, and I encourage you to explore them. Instead, I will cover several challenges I tackled while setting up an M1 build and share tips that might save you time in the future.

My journey started when I was assigned the task of preparing a native M1 build agent for Lightrun's Python and Java agents. The Lightrun agent is not to be confused with a CI agent: Lightrun's server-side offering is an OS-native agent that integrates with the language runtime.

I started by researching how to implement the M1 build agent and decided to use Scaleway's Apple silicon M1 as a service.

The onboarding process was smooth and straightforward. Within several clicks, I had an up-and-running M1 Mac mini instance ready for use.

The next step was connecting the build agent to the Azure DevOps pipeline. There's a wonderful blog post that provides a step-by-step guide to installing the Azure pipeline agent as a self-hosted agent on M1. You can find this guide here.

I want to bring to your attention that, at the time of this writing, the Azure pipeline agent is NOT supported on the M1 architecture, which is why we need Rosetta 2. Rosetta 2 lets a Mac with Apple silicon execute apps built for an Intel processor as if they were native applications. For those who want to stay updated on Azure pipeline agent support for Apple M1, there is an open discussion on GitHub, available here.
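
If Rosetta 2 is not already present on the machine, you can install it from the terminal. A minimal sketch using standard macOS tooling:

# One-time Rosetta 2 installation on an Apple silicon Mac
softwareupdate --install-rosetta --agree-to-license

# Check which architecture the current shell reports:
# "arm64" when running natively, "x86_64" under Rosetta 2 emulation
uname -m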

This is important because you might end up with a pipeline that invokes a bash script through Rosetta 2, which in turn produces artifacts targeting the amd64 architecture rather than arm64. You could expect to compile an application for arm64 and instead end up with an amd64 binary. The good news is that you can work around this by adding the "arch -arm64" prefix before invoking the script. E.g.:

- script: |
    arch -arm64 /<PATH>/<TO>/<SCRIPT>/someScript.sh
  displayName: 'running script on M1 build agent'

If you're unsure whether your artifact is indeed an arm64 artifact, you can test it using the file command or the Activity Monitor tool. Below are examples of both:

  • The following file command shows arm64 output:
file /<PATH-TO-YOUR-FILE>/Object.so 
Object.so: Mach-O 64-bit dynamically linked shared library arm64
  • This file command shows amd64 output:
file /<PATH-TO-YOUR-FILE>/Object.so 
Object.so: Mach-O 64-bit dynamically linked shared library x86_64

When running Activity Monitor, you can inspect the "Kind" column to see the architecture of each running binary: Intel means amd64 and Apple means arm64.

At this point, I was under the impression that the heavy lifting of this project was over. Unfortunately, this was not the case…

Installing Python versions

When installing specific versions of Python you should be aware of these tracked Python issues:

https://bugs.python.org/issue41100

https://github.com/python/cpython/pull/22855

The issues briefly discuss the support of macOS 11 and Apple Silicon. The interesting detail is the content of this related message:

https://bugs.python.org/msg382939

Are there plans to backport PR 22855 to any branches older than 3.9?

The plan is to also support 3.8 on Big Sur and Apple Silicon as 3.8 is still in bugfix mode. There are no plans to backport support to 3.7 and 3.6 which are in the security-fix-only phase of the release cycles.

In my case, I successfully installed versions 3.6.15, 3.7.12, 3.8.11, and 3.9.6 using pyenv.

As for 3.6 and 3.7, I encountered errors when I tried to install 3.6.14 and 3.7.11, but those errors disappeared with 3.6.15 and 3.7.12.
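
For reference, here is a minimal sketch of the pyenv workflow described above; the version numbers are the ones that worked for me, and your mileage may vary on newer macOS releases:

# Install a specific interpreter and make it the default for the current directory
pyenv install 3.8.11
pyenv local 3.8.11

# Verify which interpreter is now active
python --version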

To obfuscate scripts, I used PyArmor (https://pyarmor.readthedocs.io/en/latest/#). If you use this tool, be aware that starting with PyArmor version 7.3.3, a fix was added to circumvent an issue with obfuscated scripts on M1.
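
A hedged sketch of upgrading and running PyArmor (the script name is a hypothetical placeholder; consult the PyArmor docs for the options your project needs):

# Make sure the installed PyArmor includes the M1 fix (7.3.3 or later)
pip install --upgrade "pyarmor>=7.3.3"

# Obfuscate an entry script
pyarmor obfuscate my_script.py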

TL;DR

To recap, my overall experience of turning an M1 machine into a build agent was fascinating, and it's just the beginning. The most impactful recommendation I can offer is to pay attention to the build output and make sure your binaries match expectations (arm64 vs. amd64). I would also like to highlight the challenges posed by Apple's security restrictions.

If you had similar experiences I’d love to hear from you.

If you liked reading this blog and want to get familiar with what we are doing at Lightrun to change the future of connecting developers to their live applications, you are more than welcome to visit our blog here – https://lightrun.com/blog/

The Complete Kubectl Cheat Sheet [PDF download]

Kubernetes is one of the most well-known open-source systems for automating and scaling containerized applications. Usually, you declare the desired state of your environment, and the system will work to keep that state stable. To make changes "on the fly," you must engage with the Kubernetes API.

This is exactly where the Kubernetes command-line tool, Kubectl, comes in. Whether you’re new to kubectl and want to learn more, or you’ve been working with it for years, this cheat sheet is exactly what you need to start sending commands to your Kubernetes clusters. This article will cover all the essential Kubectl concepts and commands. We recommend you have the PDF cheat sheet version on hand when your application misbehaves and you need a quick reference guide to help you sort it out. 

What is Kubectl?

Kubectl is the Kubernetes command-line tool. It allows developers to communicate with a Kubernetes cluster's control plane. You can inspect and manage cluster resources, deploy applications, and view logs with it. Of course, to use the tool, you'll need to install it first. You can install Kubectl on Mac, Linux, and Windows. It's important to use a Kubectl version within one minor version difference of your cluster. The Kubernetes install tools docs have all the instructions you'll need for your preferred work environment.
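
For example, you can check the client and server versions to confirm they are within one minor version of each other:

# Print the kubectl client version only
kubectl version --client

# Print both the client and the cluster (server) versions
kubectl version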

Kubectl commands list

Below is a list of all the relevant commands you can use in Kubectl, separated by function. 

Kubectl objects

Kubernetes objects are persistent entities in the Kubernetes system. These entities are used to represent the state of your cluster. An object can be considered a "record of intent" – once you create it, the Kubernetes system will constantly work to ensure it exists. By creating an object, you're effectively telling the Kubernetes system what your cluster's desired state looks like.

Create multiple YAML objects from stdin

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-rest
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    args:
    - rest
    - "1000000"
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-rest-less
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    args:
    - rest
    - "1000"
EOF

Create a secret with several keys

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: origsecret
type: Opaque
data:
  password: $(echo -n "f44lar7" | base64 -w0)
  username: $(echo -n "john" | base64 -w0)
EOF


Resources

Kubectl allows you to create, update, patch, edit, scale, and delete resources. You can also use the interface to look for and view information about various resources. In this context, a resource is an endpoint in the Kubernetes API. If you aim to work with multiple resources, it might be easier to list them all in a new manifest file – a YAML or JSON file, and use kubectl as a bridge between your new manifest and the Kubernetes API.

Viewing and finding resources

List all services in the namespace

kubectl get services

List all pods in all namespaces

kubectl get pods --all-namespaces

List all pods in the current namespace, with additional details

kubectl get pods -o wide

List a particular deployment

kubectl get deployment dep-one

List all pods in the namespace

kubectl get pods

Get a pod’s manifest YAML

kubectl get pod pod-one -o yaml

Describe pods/nodes with verbose output

kubectl describe nodes my-node
kubectl describe pods my-pod

List services sorted by name

kubectl get services --sort-by=.metadata.name

List pods sorted by restart count

kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'

List PersistentVolumes sorted by capacity

kubectl get pv --sort-by=.spec.capacity.storage

Get the version label of all pods with the label app=derollo

kubectl get pods --selector=app=derollo -o jsonpath='{.items[*].metadata.labels.version}'

Retrieve the value of a key with dots, e.g. ‘ca.crt’

kubectl get configmap myconfig \
  -o jsonpath='{.data.ca\.crt}'

Retrieve a base64 encoded value with dashes instead of underscores

kubectl get secret my-secret --template='{{index .data "key-name-with-dashes"}}'

Get all worker nodes (use a selector to exclude results that have a label named 'node-role.kubernetes.io/control-plane')

kubectl get node --selector='!node-role.kubernetes.io/control-plane'

Get all running pods in the namespace

kubectl get pods --field-selector=status.phase=Running

Get ExternalIPs of all nodes

kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

List the names of pods that belong to a particular RC. The "jq" command is useful for transformations that are too complex for jsonpath; it can be found at https://stedolan.github.io/jq/.

sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})

Show labels for all pods (or any other Kubernetes object that supports labeling)

kubectl get pods --show-labels

Check which nodes are ready

JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
 && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"

Output decoded secrets without external tools

kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'

List all Secrets currently in use by a pod

kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq

List all containerIDs of init containers in all pods. This can be helpful when cleaning up stopped containers while avoiding the removal of initContainers.

kubectl get pods --all-namespaces -o jsonpath='{range .items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3

List Events sorted by timestamp

kubectl get events --sort-by=.metadata.creationTimestamp

Compare the current state of the cluster against the state the cluster would be in if the manifest were applied.

kubectl diff -f ./my-manifest.yaml

Produce a period-delimited tree of all keys returned for nodes. It can be helpful when locating a key within a complex nested JSON structure.

kubectl get nodes -o json | jq -c 'paths|join(".")'

Produce a period-delimited tree of all keys returned for pods, etc.

kubectl get pods -o json | jq -c 'paths|join(".")'

Produce ENV for all pods, assuming you have a default container for the pods and a default namespace, and that the `env` command is supported. This is helpful when running any supported command across all pods, not just `env`.

for pod in $(kubectl get po --output=jsonpath={.items..metadata.name}); do echo $pod && kubectl exec -it $pod -- env; done

Get a deployment’s status subresource

kubectl get deployment nginx-deployment --subresource=status

Creating resources

Create from a single file:

kubectl apply -f ./my-manifest.yaml

Create from multiple files:

kubectl apply -f ./my1.yaml -f ./my2.yaml

Create resources in all manifest files in dir:

kubectl apply -f ./dir

Create resources from url:

kubectl apply -f https://git.io/vPieo

Updating resources

Perform a rolling update of the "abc" containers of the "frontend" deployment by updating their image:

kubectl set image deployment/frontend abc=image:v2

Rollback to the previous deployment:

kubectl rollout undo deployment/frontend

Rollback to a specific revision:

kubectl rollout undo deployment/frontend --to-revision=3 

Watch the rolling update status of “backend” deployment until completion:

kubectl rollout status -w deployment/backend 

Rollout and restart of the “backend” deployment:

kubectl rollout restart deployment/backend

Force replace, delete, and then re-create the resource. Note that this command may cause a service outage.

kubectl replace --force -f ./pod.json

Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 3100:

kubectl expose rc nginx --port=80 --target-port=3100

Update a single-container pod’s image version (tag) to v5:

kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v5/' | kubectl replace -f -

Add a label (new-label=timely) to the pod my-pod:

kubectl label pods my-pod new-label=timely

Add an annotation to the pod my-pod:

kubectl annotate pods my-pod icon-url=http://goo.g3/XCMGh

Autoscale a deployment named “ipsum:”

kubectl autoscale deployment ipsum --min=2 --max=10

Patching resources

Partially update a node:

kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'

Update a container's image. You are required to use spec.containers[*].name since it's a merge key:

kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'

Update a container’s image using a JSON patch with arrays:

kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'

Remove "ludicrousPatch" from a deployment's first container using a JSON patch with positional arrays:

kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/ludicrousPatch"}]'

Add a new element to a positional array:

kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'

Editing resources

Edit the service named “service-registry:”

kubectl edit svc/service-registry

Use an alternative editor:

KUBE_EDITOR="nano" kubectl edit svc/service-registry

Scaling resources

Scale replica set named ‘ipsum’ to 3:

kubectl scale --replicas=3 rs/ipsum

Scale a resource specified in “ipsum.yaml” to 3:

kubectl scale --replicas=3 -f ipsum.yaml

Scale mysql to 5 (when the deployment named mysql’s current size is 2):

kubectl scale --current-replicas=2 --replicas=5 deployment/mysql

Scale multiple replication controllers:

kubectl scale --replicas=5 rc/ipsum rc/lor rc/bac

Deleting resources

Delete a pod using the type and name specified in delpod.json:

kubectl delete -f ./delpod.json 

Delete a pod immediately:

kubectl delete pod unwanted --now

Delete pods and services with the same names "bak" and "far":

kubectl delete pod,service bak far

Delete pods and services with label name=delLabel:

kubectl delete pods,services -l name=delLabel

Delete all pods and services in namespace ns-del:

kubectl -n ns-del delete pod,svc --all

Delete all pods matching the awk pattern3 or pattern5:

kubectl get pods -n mynamespace --no-headers=true | awk '/pattern3|pattern5/{print $1}' | xargs kubectl delete -n mynamespace pod

Kubectl get is probably the most helpful command in collecting information. It allows you to retrieve information about all Kubernetes objects and nodes in the Kubernetes data plane. The most common objects you are likely to query are pods, services, deployments, stateful sets, and secrets. 

The get command offers a range of possible output formats:

-o wide is like verbose; that is, it adds more information, which is dependent on the type of objects being queried.

-o yaml and -o json output the complete current state of the object and likely include more information than the original manifest files.

-o jsonpath allows you to select the information you want from the full JSON of the -o json option using the jsonpath notation.

-o go-template allows you to apply Go templates for more advanced features. Feel free to skip this one if you’re not fluent in Golang.

Here are some examples:

List all pods in the default namespace:

kubectl get pod

Get more information about a given pod:

kubectl -n mynamespace get po mypod-0 -o wide

Get the full state in YAML of a given pod:

kubectl -n mynamespace get pods/mypod -o yaml

Get the services in the default namespace:

kubectl get svc

Get the value of a secret:

kubectl -n mynamespace get secrets MYSECRET \
    -o 'jsonpath={.data.DB_PASSWORD}' | base64 -d

Get the logs from a container:

kubectl logs mypod-0 -c myapp

Display endpoint information about the master and services in the cluster:

kubectl cluster-info

Display the Kubernetes version running on the client and server:

kubectl version

View the cluster configuration:

kubectl config view

List available API resources:

kubectl api-resources

List everything for all namespaces:

kubectl get all --all-namespaces

DaemonSet 

A DaemonSet ensures that all (or some) nodes run a copy of a pod. As nodes are added to the cluster, pods are added to them. Deleting a DaemonSet will clean up the pods it created.

Some typical uses of a DaemonSet are:

  • Cluster storage daemon that can be run on every node
  • Logs collection daemon that can be run on every node
  • Node monitoring daemon that can be run on every node

You can use a single DaemonSet to cover all use cases for all nodes, or multiple DaemonSets, one for each type of daemon, with different optional flags and different memory and CPU requests.
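
As an illustration, here is a minimal sketch of a DaemonSet manifest applied with the same heredoc pattern used earlier in this cheat sheet; the name, labels, and image are hypothetical placeholders for a real log collection daemon:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      name: log-collector
  template:
    metadata:
      labels:
        name: log-collector
    spec:
      containers:
      - name: log-collector
        image: busybox:1.28
        args: ["sh", "-c", "while true; do sleep 3600; done"]
EOF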

You can use shortcode ds to denote a DaemonSet

Shortcode = ds

List one or more daemonSets:

kubectl get daemonset

Edit and update the definition of one or more daemonSets:

kubectl edit daemonset <daemonset_name>

Delete a daemonSet

kubectl delete daemonset <daemonset_name>

Create a daemonSet by applying a manifest file (kubectl create has no daemonset subcommand)

kubectl apply -f <daemonset_manifest>.yaml

Manage the rollout of a daemonSet (status, history, undo, restart)

kubectl rollout status daemonset <daemonset_name>

Display the detailed state of daemonSets within a namespace

kubectl describe ds <daemonset_name> -n <namespace_name>

Deployments

A deployment runs multiple copies of your application and automatically replaces any failed or unresponsive instances. The Kubernetes Deployment Controller manages deployments. The controller ensures that user requests are served through one or more instances of your application.

You can use shortcode deploy to denote deployment

Shortcode = deploy

List one or more deployments

kubectl get deployment

Display the detailed state of one or more deployments

kubectl describe deployment <deployment_name>

Edit and update the definition of one or more deployments on the server

kubectl edit deployment <deployment_name>

Create a new deployment

kubectl create deployment <deployment_name> --image=<image>

Delete deployments

kubectl delete deployment <deployment_name>

See the rollout status of a deployment

kubectl rollout status deployment <deployment_name>

Namespaces

In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Resource names must be unique within a single namespace but not across multiple namespaces. Namespace-based scoping is applicable only for namespaced objects (e.g., Deployments, Services, etc.) and not for cluster-wide objects (e.g., StorageClass, Nodes, PersistentVolumes, etc.).

You can use shortcode ns to denote namespace

Shortcode = ns

Create a namespace:

kubectl create namespace <namespace_name>

List one or more namespaces:

kubectl get namespace <namespace_name>

Display the detailed state of one or more namespace:

kubectl describe namespace <namespace_name>

Delete a namespace:

kubectl delete namespace <namespace_name>

Edit and update a namespace definition:

kubectl edit namespace <namespace_name>

Display the resource usage of pods in a namespace (kubectl top supports nodes and pods, not namespaces directly):

kubectl top pod -n <namespace_name>

Events 

A Kubernetes event is an object in the framework generated automatically in response to changes in other resources—like nodes, pods, or containers.

Kubernetes events can help you understand how Kubernetes resource decisions are made and so can be helpful in debugging. You can think of events like the breadcrumbs of Kubernetes.

You can use shortcode ev to denote events.

Shortcode = ev

List all recent events for all system resources:

kubectl get events

List all events of type warning only:

kubectl get events --field-selector type=Warning

List all events (excluding Pod events):

kubectl get events --field-selector involvedObject.kind!=Pod

Pull all events for a single node with a specific name:

kubectl get events --field-selector involvedObject.kind=Node,involvedObject.name=<node_name>

Filter out normal events from a list of events:

kubectl get events --field-selector type!=Normal

Logs

System component logs record events in a cluster, which is helpful for debugging. Since logs are constantly updated, this will only display the latest logs. In a production environment, it’s recommended to use a log aggregator and do your searches and filtering through it.

There can be two types of logs, fine-grained (more details) and coarse-grained (fewer details). Coarse-grained logs represent errors within a component, while fine-grained logs represent step-by-step traces of events.

Print logs for a specific pod:

kubectl logs <pod_name>

Print the logs for the last hour for a pod:

kubectl logs --since=1h <pod_name>

Retrieve the most recent 20 lines of logs:

kubectl logs --tail=20 <pod_name>

Retrieve the logs from a service, optionally selecting which container:

kubectl logs -f svc/<service_name> [-c <container_name>]

Print the logs for a pod:

kubectl logs -f <pod_name>

Print the logs for a container in a pod:

kubectl logs -c <container_name> <pod_name>

Write the logs for a pod into a file named 'pod.log':

kubectl logs <pod_name> > pod.log

Check the logs for a previously failed pod

kubectl logs --previous <pod_name>

ReplicaSets

ReplicaSets ensure you have a stable set of replica pods running as defined in the deployment file. You might use a ReplicaSet to confirm that identical pods are available.

You can use shortcode rs to denote ReplicaSets.

Shortcode = rs

List all the ReplicaSets:

kubectl get replicasets

Show the detailed state of one or more ReplicaSets:

kubectl describe replicasets <replicaset_name>

Scale a ReplicaSet to x replicas instead of the current amount:

kubectl scale --replicas=[x] rs/<replicaset_name>

Secrets 

A Secret is an object containing sensitive data such as a password, a token, or a key. Without Secrets, this information would have to be stored in a pod specification or container image; using a Secret prevents you from including confidential or sensitive information in your application code.

Create a new Secret:

kubectl create secret generic <secret_name> [--from-literal=<key>=<value>]

List all Secrets:

kubectl get secrets

List all the required details about Secrets:

kubectl describe secrets

Delete a Secret:

kubectl delete secret <secret_name>

Helm

Helm is a Kubernetes deployment tool for automating the creation, packaging, configuration, and deployment of applications and services to Kubernetes clusters. All the following commands assume a Helm-deployed application.

List releases in the current namespace:

helm list

List releases in all namespaces:

helm list --all-namespaces

List releases in a specific namespace:

helm list --namespace jenkins

Get the values used in a specific application:

helm get values jenkins -n jenkins

Get all the information used in a specific application:

helm get all jenkins -n jenkins

Services

Services are an abstract way to expose an application running on a set of Pods as a network service.

Kubernetes assigns each pod a unique IP address, and a single DNS name can load-balance across multiple pods. This allows you to treat the various pods as a sort of cloud-based black box containing the desired service.

You can use shortcode svc to denote Services.

Shortcode = svc

List one or more services:

kubectl get services

Show the detailed state of all services:

kubectl describe services

Expose a replication controller, service, deployment, or pod as a new Kubernetes service:

kubectl expose deployment [deployment_name] --port=[port]

Edit and update the definition of one or more services:

kubectl edit services

StatefulSet

StatefulSets represent a set of pods with unique, persistent identities and stable hostnames that GKE (Google Kubernetes Engine) maintains regardless of where they are scheduled. You can think of them like site URLs – they’ll (almost) always be there when you come to visit. The persistent disk storage associated with the StatefulSet is responsible for storing state information and other resilient data for the given StatefulSet pod.

You can use shortcode sts to denote StatefulSet.

Shortcode = sts

List a StatefulSet:

kubectl get statefulset

Delete StatefulSet only (not pods):

kubectl delete statefulset/[stateful_set_name] --cascade=orphan


In a nutshell

We’ve covered all the important actions you can take using Kubectl, including how to check your pods and clusters, create new objects, handle resources, and gather and display information. You can visit and revisit this cheat sheet whenever you need a little help. 

Managing your app and ensuring it runs smoothly can be time-consuming, especially if you don’t use an observability platform. Lightrun enables you to add logs, metrics, and traces to your app in real time while the app is running. Spend your time coding, not debugging. Request a demo to see how Lightrun works.

Top 5 Debugging Tips for Kubernetes DaemonSet

Kubernetes is the most popular container orchestration tool for cloud-based web development. According to Statista, more than 50% of organizations used Kubernetes in 2021. This may not surprise you, as the orchestration tool provides some fantastic features to attract developers. DaemonSet is one of the highlighted features of Kubernetes, and it helps developers improve cluster performance and reliability. Although widely used, DaemonSets can be challenging to debug since they run in the application layer. So, this article will discuss five essential tips to help you debug Kubernetes DaemonSets.

What is a Kubernetes DaemonSet? 

Kubernetes DaemonSet is a Kubernetes object that ensures all nodes (or selected subset) in a cluster run a single copy of a pod.

When you add new nodes to a cluster, the DaemonSet controller automatically adds a pod to that node. Similarly, pods will be erased when a node is deleted from the cluster.

Most importantly, DaemonSet improves the performance and reliability of your Kubernetes cluster while distributing tasks across all nodes. Some developers argue that we do not need to consider where pods run on a Kubernetes cluster. But DaemonSet is efficient for long-running services like log collection, node monitoring, and cluster storage. Also, you can create multiple DaemonSets for a single type of daemon using different flags, memory capacities, and CPU requests.

Taints and tolerations for DaemonSet

Taints and tolerations are used together to stop pods from being scheduled onto inappropriate nodes. You can apply one or more taints to a node, and the node will not accept any pods that do not tolerate those taints. Tolerations, in turn, enable the scheduler to find nodes with matching taints and schedule pods on them. However, a toleration does not guarantee scheduling.
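
For example, you can taint a node from the command line and later remove the taint; the key and value here are hypothetical:

# Only pods that tolerate dedicated=experimental will be scheduled on this node
kubectl taint nodes <node_name> dedicated=experimental:NoSchedule

# Remove the same taint (note the trailing dash)
kubectl taint nodes <node_name> dedicated=experimental:NoSchedule-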

Top 5 debugging tips for Kubernetes DaemonSet 

Now that we have a broad understanding of Kubernetes DaemonSet, let’s discuss a few tips you can use to ease the Kubernetes DaemonSet debugging process.

1. Find unhealthy pods

A DaemonSet is considered unhealthy when it does not have one pod running in each node. Unhealthy DaemonSets are caused mainly by pending pods or pods stuck in a crash loop.

You can easily find unhealthy pods in a Kubernetes cluster by listing all the available pods. The below command will list all the pods in the cluster with their statuses.

kubectl get pod -l app=[label]

You can identify the unhealthy pods by their status once they are listed. Pods with CrashLoopBackOff, Pending, and Evicted statuses are considered unhealthy. Once you identify the unhealthy pods, you can use the below commands to get more details and logs for the pod.

# Get more information about the pod
kubectl describe pod [pod-name]

# Get pod logs
kubectl logs [pod-name]

Finally, you can use the pod information and logs to determine the issue in the DaemonSet. This approach saves you a lot of time since you do not need to debug all the pods in the cluster to find the problem. You can prioritize the unhealthy pods first.
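
As a shortcut, you can also ask the API server to filter pods by status for you; for example, this lists only pods stuck in Pending across all namespaces:

kubectl get pods --all-namespaces --field-selector=status.phase=Pending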

2. Resolve the nodes that don’t have enough resources 

As mentioned, pods with CrashLoopBackOff status are considered unhealthy. This error is mainly caused by a lack of resources available to run the pod. You can follow the below steps to quickly troubleshoot pods with CrashLoopBackOff status.

First, you need to find the node that runs the unhealthy pod:

kubectl get pod [pod-name] -o wide

Then, you can use the node name from the above command's output to monitor the available node resources:
kubectl top node [node-name]

If you notice a lack of resources in the node, you can resolve it by:

  • Decreasing the memory and CPU of the DaemonSet.
  • Upgrading nodes to accommodate more pods.
  • Moving affected pods to another node.
  • Using taints and tolerations to prevent pods from running on nodes with lower resources.

However, if you don’t notice a lack of resources in the node, you will have to check node logs and investigate the pod command to find the issue.
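
A quick way to inspect a node is to describe it and read the Conditions (e.g., MemoryPressure, DiskPressure) and Events sections:

kubectl describe node [node-name]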

3. Identify container issues 

If you can’t find any issues in the pods, the error might be caused by a container within a pod. Using the wrong image is the main reason for container issues. So, first, you need to find the image name from the DaemonSet manifest and verify that you have used the correct image.

If that is not the case, you will have to investigate the container from the command line and check whether there are any application or configuration issues. You can use the below command to run the container image interactively:

docker run -ti --rm ${image} /bin/bash

4. Use Kubectl commands for troubleshooting

Using Kubectl commands is another excellent approach to debugging Kubernetes DaemonSets. Kubectl is a command line tool provided by Kubernetes to communicate easily with Kubernetes clusters. You can use it to perform any action on a cluster, including deploying apps and managing cluster resources. Most importantly, you can use Kubectl on Windows, macOS, and multiple varieties of Linux.

Here are some of the most popular Kubectl commands you can use to debug DaemonSets:

  • Kubectl describe – Provides detailed information on deployments, services, and pods. When debugging, you can use this command to fetch details on nodes to identify memory and disk space issues.
  • Kubectl logs – Used to display logs from a Kubernetes resource. These logs can be a lifesaver when you need more information to determine an error’s root cause.
  • Kubectl exec – You can execute commands in a running container using this command. You can use this command to view configuration, startup scripts, and permissions when debugging.
  • Kubectl auth – This is another essential command for debugging. It allows you to verify that a selected user or a group can perform a particular action.
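
For instance, the last two commands above might be used like this (the pod name, namespace, and verb are placeholders):

# Open an interactive shell inside a running pod to inspect configuration
kubectl exec -it [pod-name] -- /bin/sh

# Check whether the current user is allowed to list pods in a namespace
kubectl auth can-i list pods --namespace [namespace-name]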

5. Invest in an observability platform

Logs are an essential part of application debugging. It is no different for Kubernetes, and you can add logs as you see fit to make the debugging process more straightforward. However, manually adding logs is not an easy task. It takes a lot of time, and there can be human errors.

The best way to add logs to your application is by using a specialized observability tool like Lightrun. Such tools help developers monitor their applications in real-time, identify issues, and quickly fix them. Using a specialized tool makes the debugging process much more efficient and faster. 

Next steps

The five tips we discussed to debug Kubernetes DaemonSet should make the debugging process easier for you. However, debugging DaemonSets is naturally challenging since daemons are placed in the application layer of the workload. It is always more beneficial to use an observability tool like Lightrun to automate some of your work. Lightrun enables you to add logs, metrics, and traces to your Kubernetes clusters and monitor these in real-time while your app is running. You can find more details on how Lightrun works by requesting a demo.

Top 10 Java Linters

If you want to ensure code maintainability over the long term, you should follow best coding practices and style guide rules. One of the best ways to achieve this, while also potentially finding bugs and other issues with your code, is to use a linter.

Linters are best described as static code analyzers because they check your code before it even runs. They can work inside your IDE, run as part of your build process, or be inserted into your workflow anywhere in between. While the use cases for linters can be rather varied, their utility usually focuses on code cleanup and standardization. In other words, using a linter helps make your code less sloppy and more maintainable.

Check out the below example for a demonstration of how a linter works, from Checkstyle:

Before:

public abstract class Plant {

  private String roots;

  private String trunk;

  protected void validate() {
    if (roots == null) throw new IllegalArgumentException("No roots!");
    if (trunk == null) throw new IllegalArgumentException("No trunk!");
  }

  public abstract void grow();
}

public class Tree extends Plant {

  private List leaves;

  @Override
  protected void validate() {
    super.validate();
    if (leaves == null) throw new IllegalArgumentException("No leaves!");
  }

  public void grow() {
    validate();
  }
}

After:

public abstract class Plant {

  private String roots;

  private String trunk;

  private void validate() {
    if (roots == null) throw new IllegalArgumentException("No roots!");
    if (trunk == null) throw new IllegalArgumentException("No trunk!");
    validateEx();
  }

  protected void validateEx() { }

  public abstract void grow();
}

In this article, I’ll examine ten of the best linters for Java. You’ll find that while most linters aren’t “better” or “worse” than others, there are certainly some that come with a wider breadth of features, making them more powerful or flexible than some of their niche counterparts.

Ultimately, it’s best to choose a linter that works best for your specific business use case and workflow.

1. Checkstyle

Checkstyle is one of the most popular linters available. With this popularity comes regular updates, thorough documentation, and ample community support. Checkstyle works natively with Ant and the CLI. It is also available as a plugin for a wide variety of IDEs and toolsets, including Eclipse, Codacy, Maven, and Gradle – although these plugins are managed by third parties, so there's no guarantee of long-term support.

Checkstyle comes with pre-made config files that support both Sun Code Conventions and Google Java Style, but because these files are XML, they are highly configurable to support your workflow and production needs.
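
As a hedged example, Checkstyle's all-in-one jar can be run from the command line against a source file using the bundled Google ruleset (the jar version and file path below are illustrative; use whichever release you have downloaded):

java -jar checkstyle-10.3-all.jar -c /google_checks.xml src/main/java/Plant.java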

It is also worth mentioning that a project with Checkstyle built into its build process will fail to build even if minor errors are present. This might be a problem if you’re only looking to catch larger errors and don’t have the resources to fix tiny errors that don’t have a perceptible impact.

2. Lightrun

The second member of this list is not actually a linter per se, but it will help you improve your code quality and prevent bugs before they become serious problems. Whereas everything to this point has been a static code analyzer, Lightrun is a runtime debugger. At the end of the day, static code analysis and linting can only get you so far, so if you need a little more, Lightrun is worth adding to your workflow.

Production is the ultimate stress test for any codebase, especially in the age of cloud computing. Lightrun allows you to insert logs, metrics, and snapshots into your code, even at runtime, directly from your IDE or CLI. Lightrun lets you debug issues in production and run data analysis on your code without slowing down or interrupting your application.

3. PMD

What do the PMD initials stand for? It seems even the developers don’t know…

Like Checkstyle, PMD is a popular static code analyzer with an emphasis on Java. Unlike Checkstyle, PMD supports multi-language analysis, including JavaScript, Salesforce.com Apex, and Visualforce. This could be helpful if you'd like to use a single linter for a frontend and backend codebase.

In the developers’ own words, “[PMD] finds common programming flaws like unused variables, empty catch blocks, unnecessary object creation, and so forth.” In addition, it comes with a copy-paste-detector (CPD) to find duplicated code in a myriad of languages, so it is easier to find code that could be refactored.
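
For a rough idea of usage, PMD 6's distribution ships a run.sh launcher (pmd.bat on Windows) for both the analyzer and CPD; the paths and ruleset below are illustrative:

# Analyze Java sources with the quickstart ruleset
./run.sh pmd -d ./src -R rulesets/java/quickstart.xml -f text

# Hunt for duplicated code with CPD
./run.sh cpd --minimum-tokens 100 --files ./src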

4. Uncrustify

Uncrustify diverges from the previous linters in that Java is not its primary focus. Instead, it is a “code beautifier” made for C and C-like languages, including Java. On the one hand, Uncrustify is great for projects with a C-based or C-analogue-based workflow. On the other hand, its feature list begins and ends with simply making your code look nicer.

Uncrustify works by running through your code and automatically updating its white space, bracketing, and other formatting conventions to match a ruleset. Because this is an automated process, the developers themselves caution against running Uncrustify on an entire project without checking the changes afterward.

Uncrustify is best used in conjunction with other linters and dev tools. It isn’t particularly powerful on its own but could come in handy for niche workflows that involve multiple C-based languages.

5. Error Prone

Error Prone is an error finder for your code builds, specifically built for Java. It is designed to supplement your compiler’s static type checker, and find potential runtime errors without running the code. The example provided on their website seems trivial, especially to any developers who’ve been working out of an IDE for most of their careers.

But for codebases where the compile and run process can stretch into hours or even days, having that extra check can save a lot of time and headaches, especially if a particular bug might be in an uncommonly accessed block of code.

6. Tattletale

Tattletale might not be considered a linter in the traditional sense. While it does analyze your static code like the other linters on this list, it is less concerned with individual blocks of code or particular development standards, and it is more focused on finding package and library redundancies in your project.

Not only will Tattletale identify different dependencies within your JAR files, but it will also suss out duplicate JAR files, find duplicate or missing classes, and check similar JARs with different version numbers. Long-term, not only will this keep your project size slimmer, but it will also help prevent head-scratching errors where you’re calling two different versions of the same package and getting different results because of changes between versions. All of this information is put into an HTML report for easy viewing.

Because of the high-level intention of this tool, it won’t help much with line-to-line code edits. But with that said, if you’re running a purely Java codebase, Tattletale is a tool worth adding to your arsenal.

7. UCDetector

UCDetector, short for Unnecessary Code Detector, does exactly what its name implies. In addition to finding “dead” code, it also marks classes whose privacy modifier could be changed from public to something more restricted and methods or fields which can be set to final.

One of the earliest OOP concepts taught in school is that programmers should only set classes, methods, and data fields to public if they explicitly know those elements will be accessed or modified by external classes. However, when in the thick of coding, even with the best-defined UMLs, it can sometimes be difficult to determine when a class or method should be public, private, or protected.

After your code is completed and debugged, a pass through the UCDetector will help you catch any code blocks you missed or mistakenly set to the wrong privacy modifier, potentially saving you headaches down the road and preventing sensitive field members from being unintentionally exposed to clients.

8. linter for Scala

So, let's say that you're looking to move on from Java. You want something familiar that will integrate well with your current back end. Instead of going with C#, you decide to upgrade to Scala. Not coincidentally, the more niche a language is, the more difficult it is to find linting tools for it. That's not to say that Scala is terribly niche, just that it can be a little more difficult to find support for it than for vanilla Java.

With that said, a nice starting point would be this toolset, simply titled: linter Compiler Plugin. According to the GitHub page, "Linter is a Scala static analysis compiler plugin which adds compile-time checks for various possible bugs, inefficiencies, and style problems." Not only is it written for Scala, but it is also written almost exclusively in Scala.

Unfortunately, the last commit on the project was in 2016, so it might not be well-equipped for any new features introduced to the language in the past five years.

9. Scalastyle

The developers of Scalastyle describe it thusly: “Scalastyle examines your Scala code and indicates potential problems with it. If you have come across Checkstyle for Java, then you’ll have a good idea of what Scalastyle is. Except that it’s for Scala obviously.”

So for those of you who loved Checkstyle but are moving to a Scala workflow, rejoice, for Scalastyle is here. It's kept up to date and is likely a better option than linter for Scala above.

10. Coala

Of all the linters on this list, Coala seems to aim for the most flexibility. It claims that it works by, “linting and fixing code for all languages.” While Fortran is located nowhere on their list of supported languages, Coala does support quite an extensive list, including (of course) Java.

All of these languages can be linted using a single config file, so if you’re working in a multi-language web environment (is there any other kind of web environment?) you’ll find Coala is well-suited to your needs.

Final Thoughts

Linters make an awesome addition to just about any Java development environment. Debugging, error-checking, and issue prevention are all multi-step procedures that should take place in every phase of development. A linter is a great tool for when you want to analyze code without having to run it, maintain coding best practices, and ensure long-term code maintainability. However, each linter’s usefulness only extends so far and should thus be supplemented by other tools, including runtime debugging software like Lightrun.

Top 8 VS Code Python Extensions

Visual Studio Code (a.k.a. VS Code or VSCode) is an open-source and cross-platform source code editor. It was ranked the most popular development tool in the Stack Overflow 2021 Developer Survey, with 70% of the respondents using it as their primary editor. Out of the box, VS Code supports a few programming languages, like JavaScript and TypeScript; you need VS Code extensions if you want to use any other programming language or take advantage of extra tools to improve your code.

Python is one of the top computer languages used by developers worldwide for creating a variety of programs, from simple to scientific applications. But VS Code does not directly support Python. Therefore, if you want to use Python in VS Code, it is important to add good Python extensions. Luckily, there are many options available. However, the biggest challenge is to find the most complete and suitable extensions for your requirements.

Top 8 VS Code Python Extensions

To simplify your search for the most suitable Python extensions for your needs, we put together a list of the top 8 VS Code Python extensions available on the market:

1. Python (Microsoft)

The Python VS Code extension developed by Microsoft is feature-rich and completely free. VS Code will automatically suggest this extension when you start to create a .py file. Its IntelliSense feature enables useful functionality like code auto-completion, navigation, and syntax checking.

When you install it, the Python VS Code extension will automatically add the Pylance extension, which gives you rich language support, and the Jupyter extension for using Jupyter notebooks. To run tests, you can also use unittest or pytest through its Test Explorer feature. Other valuable capabilities include code debugging, formatting, refactoring, and automatic switching between various Python environments.
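
If you prefer the command line, extensions can also be installed with VS Code's CLI; the identifier below is the Microsoft Python extension's marketplace ID:

code --install-extension ms-python.python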

2. Lightrun

Lightrun is a real-time debugging platform that supports applications written in several languages, including Python, and is available as a VS Code extension. It consists of an intuitive interface for you to add logs, traces, and metrics in real-time for debugging code in production. You can add Lightrun snapshots to explore the stack trace and variables without stopping your live application.

Also, you can add real-time performance metrics to measure your code's performance and synchronization, which will allow you to find performance bottlenecks in your applications. Lightrun supports multi-instance applications such as microservices and big data workers with a tagging mechanism. Lightrun is a commercial product, but it comes with a 14-day free trial.

3. Python Preview

This VS Code extension helps understand and debug Python code faster by visualizing code execution with animations and graphics. It previews object allocation and stack frames side-by-side with your code editor, even before you start debugging. This Python extension is also 100% free to use.

4. Better Comments

Comments are critical for any code as they help the developers understand the code better. The Better Comments Python extension is slightly different than the others. It focuses solely on making more human-friendly and readable comments for your Python code. With this extension, you can organize your annotations and improve code clarity. You can use several categories and colors to categorize your annotations—for example, Alerts, Queries, TODOs, and Highlights.

You can also mark commented-out code in a different style and set various comment settings as you see fit. This free VS Code extension also supports many other programming languages.

5. Python Test Explorer

When developing an application, testing is a must to maintain code quality, and you will have to use different types of test frameworks. The Python Test Explorer extension for VS Code lets you run Unittest, Pytest, or Testplan tests.

The Test Explorer extension will show you a complete view of tests and test suites along with their state in VS Code’s sidebar. You can easily see which tests are failing and focus on fixing them.

In addition, this VS Code extension supports convenient error reporting. It will indicate tests having errors, and you can see the complete error message by clicking on them. If you are working with multiple project folders in VS Code, it enables you to run tests on such multi-root workspaces.

6. Python Indent

Having the correct indentation is vital when developing in Python, and adding closing brackets can sometimes get cumbersome. The Python Indent extension helps you maintain proper Python indentation in VS Code. This extension adds closing brackets automatically when you press the Tab key, which speeds up coding and enables you to save a lot of your valuable time.

It can also indent keywords, extend comments and trim whitespace lines. This free VS Code Python extension works by registering the Enter key as a keyboard shortcut, though sometimes it can unexpectedly override the Enter behavior.

7. Python Snippets 3

Python Snippets 3 is a helpful VS Code extension that makes Python code snippets available while you are typing. It provides snippets like built-in strings, lists, sets, tuples, and dictionaries. Other code snippets include if/else, for, while, while/else, try/catch, etc.

There are also Python snippets for Object-Oriented Programming concepts such as inheritance, encapsulation, polymorphism, etc. Since this VS Code extension provides many Python code examples, it is helpful for beginners. However, note that this extension can sometimes add incorrect tab spaces.

8. Bracket Pair Colorizer 2 (CoenraadS)

Bracket Pair Colorizer 2 is another VS Code Python extension that lets developers quickly identify which brackets pair with each other and makes it easier to read code. Matching brackets are highlighted with colors, and you can set tokens and colors that you want to use. This free VS Code extension can be even more helpful if your Python code contains nested conditions and loops.

Although marked as deprecated, this extension is still popular, and many users prefer it to the native Python bracket matching functionality that has since been added to VS Code.

Write Better Python in VS Code

The VS Code Python extensions we discussed here provide helpful features like automatic code completion, test running, indentation, useful snippets to learn Python, and adding different kinds of comments. These extensions help make code more accurate, improve readability, and detect bugs in the system.

One of these Python extensions, Lightrun, enables a robust Python debugging experience in production right from VS Code, and if this is what you need, get started with Lightrun today.

Top 8 IntelliJ Debug Shortcuts

Let's get real – as developers, we spend a significant amount of time staring at a screen and trying to figure out why our code isn't working. According to Coralogix, there is an average of 70 bugs per 1,000 lines of code. That's a solid 7% worth of blips, bumps, and bugs. In addition, fixing a bug can take 30 times longer than writing an actual line of code. But it doesn't have to be this way. If you're using IntelliJ (or are thinking about making the switch to it), the built-in debugger and its shortcuts can help speed up the process. But first, what is IntelliJ?

What is IntelliJ?

If you’re looking for a great Java IDE, you should check out IntelliJ IDEA. It’s a robust, feature-rich IDE perfect for developing Java applications. While VSCode is excellent in many situations, IntelliJ is designed for Java applications. Here’s a quick overview of IntelliJ IDEA and why it’s so great.

IntelliJ IDEA is a Java IDE developed by JetBrains. It’s a commercial product, but a free community edition is available. Some of the features include:

  • Intelligent code completion
  • Refactoring
  • Code analysis
  • Support for various frameworks and libraries
  • Great debugger

Debugging code in IntelliJ

If you’re a developer, you will have to debug code sooner or later. But what exactly is debugging? And why do we do it?

Debugging is the process of identifying and removing errors from a computer program. Errors can be caused by incorrect code, hardware faults, or software bugs. When you find a bug, the first thing to do is try to reproduce it so you can narrow down the problem and identify the root cause. Once you've reproduced the bug, you can start to debug the code.

Debugging is typically done by running a program in a debugger, which is a tool that allows the programmer to step through the code, line by line. The debugger will show the values of variables and allow programmers to change them so they can find errors and fix them.

The general process of debugging follows this flow:

  • identify the bug
  • reproduce the bug
  • narrow down where in the code the bug is occurring
  • understand why the bug exists
  • fix the bug

Debugging process

More often than not, we spend our time on the second and third steps. Statistically, we spend approximately 75% of our time just debugging code. In the US, $113B is spent on developers trying to figure out the what, where, why, and how of existing bugs. Leveraging the IDE’s built-in features will allow you to condense the debugging process.

Sure, using a debugger will slow down the execution of the code, but most of the time, you don’t need it to crawl at a snail’s pace through the entire process. The shortcut controls allow you to observe the inner workings of your code at the rate you need.

Without further ado – here are the top 8 IntelliJ debug shortcuts, what they do and how they can help speed up the debugging process.

Top 8 IntelliJ Debug Shortcuts

1. Step Over (F8)

Stepping is the process of executing a program one line at a time. It helps with debugging by allowing the programmer to see the effects of each line of code as it is executed. You can step manually, by setting breakpoints and advancing the program line by line, or let a debugger tool execute the program line by line for you.

Step over (F8) takes you to the next line without entering any method called on the current line. This step can be helpful if you need to quickly pass through the code, hunt down specific variables, and figure out at what point the code exhibits undesired behavior.

Step Over (F8)

2. Step into (F7)

Step into (F7) will take the debugger inside the method to demonstrate what gets executed and how variables change throughout the process.

This functionality is helpful if you want to narrow down your code during the transformation process.

Step into (F7)
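
To make the difference concrete, here is a minimal, hypothetical Java snippet (the class and method names are invented for illustration, not taken from IntelliJ). With the debugger paused on the line that calls applyDiscount, Step over (F8) executes the whole call and stops on the next line, while Step into (F7) stops on the first line inside the method:

    public class Checkout {

        public static void main(String[] args) {
            double total = 100.0;
            // Paused here: F8 (Step over) runs applyDiscount() in one go and
            // stops on the println below; F7 (Step into) enters the method.
            double discounted = applyDiscount(total);
            System.out.println("To pay: " + discounted); // F8 lands here
        }

        static double applyDiscount(double amount) {
            double rate = 0.1; // F7 lands here
            return amount * (1 - rate);
        }
    }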

3. Smart step into (Shift + F7)

Sometimes multiple methods are called on the same line. Smart step into (Shift + F7) lets you decide which one to invoke, which is helpful as it enables you to target potentially problematic methods or go through a clear process of elimination.

Smart step into (Shift + F7)
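
For instance, in the hypothetical snippet below (again, the names are invented), a single statement calls three methods; Smart step into lists load(), transform(), and save() and lets you pick which one to enter:

    public class Pipeline {
        public static void main(String[] args) {
            // One statement, three calls: Smart step into (Shift + F7)
            // lists load(), transform(), and save() and lets you choose.
            save(transform(load(42)));
        }

        static String load(int id)        { return "record-" + id; }
        static String transform(String s) { return s.toUpperCase(); }
        static void save(String s)        { System.out.println(s); }
    }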

4. Step out (Shift + F8)

At some point, you will want to exit the method. The step out (Shift + F8) functionality finishes the current method and takes you back to the calling method, one level up the call hierarchy of your code.

Step out (Shift + F8)

5. Run to cursor (Alt + F9)

As an alternative to setting manual breakpoints, you can use your cursor as the marker for your debugger.

Run to cursor (Alt + F9) will let the debugger run until it reaches where your cursor is pointing. This step can be helpful when you are scrolling through code and want to quickly pinpoint issues without the need to set a manual breakpoint.

6. Evaluate expression (Alt + F8)

It’s one thing to run your code at the speed you need; it’s another to see what’s happening at each step. Under normal circumstances, hovering your cursor over the expression will give you a tooltip.

But sometimes, you just need more details. Using the evaluate expression shortcut (Alt + F8) reveals the child elements of the object, giving you full transparency into its state.

Evaluate expression (Alt + F8)
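
As a small, hypothetical illustration (the variable names are invented): while paused inside the loop below, Alt + F8 lets you evaluate an arbitrary expression against the live program state, such as a stream query over the orders list, and drill into the result:

    import java.util.List;

    public class Report {
        public static void main(String[] args) {
            List<Integer> orders = List.of(40, 250, 99, 310);
            int sum = 0;
            for (int amount : orders) {
                // Pause here and press Alt + F8 to evaluate, for example:
                // orders.stream().filter(o -> o > 100).count()
                sum += amount;
            }
            System.out.println("Total: " + sum);
        }
    }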

7. Resume program (F9)

Debugging is a constant stop-and-start process, and F9 is how you toggle it. This shortcut kickstarts the debugger back into gear and gets it moving to the next breakpoint.

For Mac, a key chord (Cmd + Alt + R) is required to resume the program.

8. Toggle (Ctrl + F8) & view breakpoints (Ctrl + Shift + F8)

Breakpoints can get nested inside methods – which can be a hassle to look at if you want to step out and see the bigger picture. This is where the ability to toggle breakpoints comes in.

You can toggle line breakpoints with Ctrl+F8. Alternatively, if you want to view and set exception breakpoints, you can use Ctrl+Shift+F8.

For macOS, the key chords are:

  • Toggle – Cmd + F8
  • View breakpoints – Cmd + Shift + F8

Toggle (Ctrl + F8) & view breakpoints (Ctrl + Shift + F8)

Improving the debugging process

If you’re a software engineer, you know that debugging is essential for the development process. It can be time-consuming and frustrating, but it’s necessary to ensure that your code is working correctly.

Fortunately, there are ways to improve the debugging process, and one of them is by using Lightrun. Lightrun is a cloud-based debugging platform you can use to debug code in real-time. It is designed to make the debugging process easier and more efficient, and it can be used with any programming language.

One of the great things about Lightrun is that you can use it to debug code in production, which means that you can find and fix bugs in your code before your users do. Lightrun can also provide a visual representation of the code being debugged. This can help understand what is going on and identify the root cause of the problem. Start using Lightrun today!

The post Top 8 IntelliJ Debug Shortcuts appeared first on Lightrun.

]]>
OpenTracing vs. OpenTelemetry https://lightrun.com/opentracing-vs-opentelemetry/ Sat, 18 Jun 2022 12:39:21 +0000 https://lightrun.com/?p=517 Monitoring and observability have increased with software applications moving from monolithic to distributed microservice architectures. While observability and application monitoring share similar definitions, they also have some differences

The post OpenTracing vs. OpenTelemetry appeared first on Lightrun.

]]>
The need for monitoring and observability has increased as software applications move from monolithic to distributed microservice architectures. While observability and application monitoring share a similar definition, they also have some differences.

The purpose of both monitoring and observability is to find issues in an application. However, monitoring aims to capture already known issues and display them on a dashboard to understand their root cause and the time they occurred. 

On the other hand, observability takes a much lower-level approach, where developers debug the code to understand the internal state of an application. Thus, observability is the latest evolution of application monitoring, helping detect previously unknown issues.

Observability vs Monitoring

Three pillars facilitate observability. They are logs, metrics, and traces. 

  • Metrics indicate that there is an issue.
  • Traces tell you where the issue is. 
  • Logs help you to find the root cause. 

Observability offers several benefits, and its adoption is growing quickly.

According to Gartner, by 2024, 30% of enterprises will use observability to improve the performance of their digital businesses, up from less than 10% in 2020.

What is OpenTracing?

Logs help understand what is happening in an application. Most applications create logs on the server on which they’re running. However, logs won’t be sufficient for distributed systems as it is challenging to find the location of an issue with logs. Distributed tracing comes in handy here, as it tracks a request from its inception to the end. 

Although tracing provides visibility into distributed applications, instrumenting traces is a very tedious task. Each tracing tool available works in its own way, and they are constantly evolving. Besides, different tools may be required for different situations, so developers shouldn’t be stuck with one tool throughout the whole software development process. This is where OpenTracing comes into play.

OpenTracing is an open-source vendor-agnostic API that allows developers to add tracing into their code base. It’s a standard framework for instrumentation and not a specific, installable program. By providing standard specifications to all tracing tools available, developers can choose the tools that suit their needs at different stages of development. The API works in nine languages, including Java, JavaScript, and Python. 

OpenTracing

OpenTracing Features 

OpenTracing consists of four main components that are easy to understand. These are:

Tracer 

A Tracer is the entry point of the tracing API. Tracers are used to create spans. They also let us extract and inject trace information from and to external sources. 

Span

Spans are the primary building block, a unit of work in a trace. The first span, created when a web request starts a new trace, is called the “root span.” If that request initiates another request in its workflow, the span for the second request is a child span. Spans can support more complex workflows, even those involving asynchronous messaging.

SpanContext 

SpanContext is a serializable form of a Span that transfers Span information across process boundaries. It contains trace id, span id, and baggage items.

References 

References build connections between spans. There are two types of references called ChildOf and FollowsFrom. 
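
To sketch how these components fit together, here is a minimal example using the io.opentracing Java API; it assumes a concrete tracer implementation (for example, a Jaeger tracer) has been registered elsewhere, and the operation names are invented:

    import io.opentracing.Span;
    import io.opentracing.Tracer;
    import io.opentracing.util.GlobalTracer;

    public class CheckoutService {
        public void handleRequest() {
            Tracer tracer = GlobalTracer.get(); // Tracer: entry point of the API

            // Root span for the incoming request
            Span root = tracer.buildSpan("handle-request").start();
            try {
                // Child span; a ChildOf reference links it to the root span
                Span child = tracer.buildSpan("query-db").asChildOf(root).start();
                try {
                    // ... do the unit of work. root.context() returns the
                    // SpanContext you would inject into outgoing request
                    // headers to continue the trace across processes.
                } finally {
                    child.finish();
                }
            } finally {
                root.finish();
            }
        }
    }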

What is OpenTelemetry?

Telemetry data is a common term across different scientific fields. It is a collection of datasets gathered from a remote location to measure a system’s health. In DevOps, the system is the software application, while the data we collect are logs, traces, and metrics.  

OpenTelemetry is an open-source framework with tools, APIs, and SDKs for collecting telemetry data. This data is then sent to the backend platform for analysis to understand the status of an application. OpenTelemetry is a Cloud Native Computing Foundation (CNCF) incubating project created by merging OpenTracing and OpenCensus in May 2019. 

OpenTelemetry aims to create a standard format for collecting observability data. Before solutions like OpenTelemetry, collecting telemetry data across different applications was inconsistent and a considerable burden for developers. OpenTelemetry provides a standard for observability instrumentation with its vendor-agnostic APIs and libraries. It saves companies a lot of valuable time otherwise spent on creating mechanisms to collect telemetry data.

You can install and use OpenTelemetry for free. This guide will tell you more about this framework. 

OpenTelemetry Architecture

OpenTelemetry features

You have to know the critical components of OpenTelemetry to understand how it works. They are as follows: 

API 

APIs help to instrument your application to generate traces, metrics, and logs. These APIs are language-specific and written in various languages such as Java, .Net, and Python. 

SDK

SDK is another language-specific component that works as a mediator between the API and the Exporter. It defines concepts like configuration, data processing, and exporting. The SDK also handles transaction sampling and request filtering well.

Collector 

The collector gathers, processes, and exports telemetry data. It acts as a vendor-agnostic proxy. Though it isn’t an essential component, it is helpful because it can receive and send application telemetry data to the backend with great flexibility. For example, if necessary, you can handle multiple data formats from OTLP, Jaeger, and Prometheus and send that data to various backends. 

In-process exporter 

You can use the Exporter to configure the backend to which you want to send telemetry data. The Exporter separates the backend configuration from the instrumentation. Therefore, you can easily switch the backend without changing the instrumentation. 
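
Here is a minimal sketch of what instrumentation looks like with the OpenTelemetry Java API, assuming the SDK and an exporter are configured elsewhere (for example, via the OpenTelemetry Java agent); the tracer and span names are invented:

    import io.opentelemetry.api.GlobalOpenTelemetry;
    import io.opentelemetry.api.trace.Span;
    import io.opentelemetry.api.trace.Tracer;
    import io.opentelemetry.context.Scope;

    public class OrderHandler {
        public void handleOrder(String orderId) {
            // The API generates the telemetry; the SDK and exporter configured
            // elsewhere decide how it is processed and where it is sent.
            Tracer tracer = GlobalOpenTelemetry.getTracer("order-service");

            Span span = tracer.spanBuilder("handle-order").startSpan();
            try (Scope scope = span.makeCurrent()) {
                span.setAttribute("order.id", orderId);
                // ... business logic; spans started here become children
            } finally {
                span.end();
            }
        }
    }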

Differences between OpenTracing and OpenTelemetry

OpenTracing and OpenTelemetry are both open-source projects aimed at providing vendor-agnostic solutions. However, OpenTelemetry is the latest solution created by merging OpenTracing and OpenCensus. Thus, it is more robust than OpenTracing.

While OpenTracing collects only traces in distributed applications, OpenTelemetry gathers all types of telemetry data, such as logs, metrics, and traces. Moreover, OpenTelemetry is a collection of APIs, SDKs, and libraries that you can use directly. One of the critical advantages of OpenTelemetry is its ability to quickly change the backend used to process telemetry data.

Overall, there are many benefits to using OpenTelemetry over OpenTracing, which is why developers are migrating from the older standard to the newer one.

Summary

Logs, traces, and metrics are essential to detect anomalies in your application. They help to avoid any adverse effects on the user experience. While logs can be less effective in distributed systems, traces can indicate the location of an issue. Solutions like OpenTracing and OpenTelemetry provide standards for collecting this telemetry data. 

You can simplify observability by using Lightrun. This tool allows you to insert logs and metrics in real time, even while the server is running. You can debug all types of applications, including monolithic applications, microservices, Kubernetes clusters, and Docker Swarm. Among many other benefits, Lightrun enables you to quickly resolve bugs, increase productivity, and enhance site reliability. Get started with Lightrun today!

The post OpenTracing vs. OpenTelemetry appeared first on Lightrun.

]]>
What is Kubernetes Lens? https://lightrun.com/what-is-kubernetes-lens/ Mon, 08 Nov 2021 17:44:24 +0000 https://lightrun.com/?p=6570 One major problem with Kubernetes is that it comes with a vast amount of moving parts and certain complexities, such as handling clusters, scaling, storage orchestration, batch execution, and more. This all hinders mainstream developer adoption.

The post What is Kubernetes Lens? appeared first on Lightrun.

]]>
As a DevOps engineer, one day you’re performing magic in the terminal, wrangling clusters, and feeling like a god. On other days, you feel like a total fraud and a scam. Errors and bugs appear from everywhere; you don’t know where to start, and you don’t know where to look. Sadly, days like this come far too often. To be more specific, what often causes these bad days is none other than Kubernetes itself. While Kubernetes is the force and magic that manages your clusters, it can also be your bane.

Kubernetes is a portable, extensible, open-source system for automating the deployment, scaling, and management of containerized applications and services. It is a cluster management tool that helps to abstract machines, storage, and networks away from their physical implementation. Almost everyone in the DevOps community uses Kubernetes.

However, one major problem with Kubernetes is that it comes with a vast amount of moving parts and certain complexities, such as handling clusters, scaling, storage orchestration, batch execution, and more. This all hinders mainstream developer adoption.

Another problem with Kubernetes is the use of command-line CLIs that consume and retrieve multiple files, and the use of tools like kubectl that might be good for some, but which can be overwhelming for others who may prefer GUIs.

In this article, you will learn what Kubernetes Lens is, what it does, and why it is useful.

About Kubernetes Lens – The Kubernetes IDE

Kubernetes Lens is an effective, open-source IDE for Kubernetes. Lens simplifies working with Kubernetes by helping you manage and monitor clusters in real time. It was developed by Kontena, Inc. and then acquired by Mirantis in 2020, who then open-sourced it and made it available to download for free.

Lens is a standalone application and can be installed on macOS, Windows, and some Linux flavors. With Kubernetes Lens, you can talk to any Kubernetes cluster, anywhere.

Kubernetes Lens is aimed at developers, SREs, and software engineers in general. It is most likely the only platform you will need to manage the cluster system of your Kubernetes. It is backed by a number of Kubernetes and cloud-native ecosystem pioneers such as Apple, Rakuten, Zendesk, Adobe, Google, and others.

Why Kubernetes Lens?

There are a variety of features that make Kubernetes Lens a highly attractive tool. Here is an overview of a few of them.

Cluster Management

Managing clusters in Kubernetes can be difficult, but with Kubernetes Lens, you can work on multiple clusters while maintaining context for each of them. Lens makes it possible to configure, change, and switch between clusters with one click, revealing the entire working state of each cluster along with its metrics. With this information, you can make changes and apply them quickly and confidently.

Adding a Kubernetes cluster to Lens is easy. All you need to do is point Lens at the local or online kubeconfig file, and it automatically discovers and connects to the cluster.

With Lens, you can inspect all the resources running inside your cluster, ranging from simple Pods and Deployments to the custom types added by your applications.

Built-In Visualization and Metrics

Kubernetes Lens comes with a built-in Prometheus setup that supports multiple users with role-based access control (RBAC). That means that, in a cluster, users can only access the visualizations they have permission to see.

In Lens, when you configure a Prometheus instance, it is able to display metrics and visualizations about the cluster. To add Prometheus to Lens if it is not already installed, follow these steps:

  1. Right-click on the cluster icon in the upper left corner of the UI.
  2. Click Settings.
  3. Under Features, find and select the Metrics stack.
  4. Then click Install to install the Prometheus stack (this may take anywhere from a few seconds to a couple of minutes).

Prometheus stack

After the installation, Lens autodetects Prometheus for that cluster and then begins to display cluster metrics and visualizations. You can also preview the Kubernetes manifests for Prometheus before you apply them.

With Prometheus, you get access to real-time graphs, resource utilization charts, and usage metrics such as CPU, memory, network, requests, etc., which are integrated into the Lens dashboard. These graphs and metrics are shown in the context of the particular cluster that is viewed at that moment, in real time.

Usage metrics

Kubernetes Lens also integrates with Helm, making it easy to install and manage Helm charts and releases in Kubernetes.

Helm charts

Kubernetes Lens allows you to use available Helm repositories from the Artifact Hub and automatically adds a bitnami repository by default if no other repositories are already configured. If you need to add any other repositories, those can be added manually via the command line. Do note that configured Helm repositories are added globally to the user’s computer, so other processes can see those as well. All charts from configured Helm repositories will be listed in the Apps section.

Lens Extensions

Kubernetes Lens Extensions allow you to add new and custom features and visualizations to accelerate development workflows for all the technologies and services that integrate with Kubernetes. To use Lens Extensions, go to File (or Lens on macOS) and then click Extensions in the application menu. You can install extensions in three ways on Lens:

  1. Installing the extension as a .tgz file, then dragging and dropping it in the extension management page will install it for you.
  2. If the extension is hosted on the web, you can paste the URL and click Install, and Lens will download and install it.
  3. You can also move the extension into your ~/.k8slens/extensions (or C:\Users\.k8slens\extensions) folder and Lens will automatically detect it and install the extension.

Kubernetes Lens also allows you to script your own extensions with the Lens APIs. They support adding new object details, creating custom pages, adding status bar items, and other UI modifications. Extensions can be published to npm to generate a tarball link that the Kubernetes Lens install screen can reference.

Extensions

GUI over CLI

Lens provides a way to manage Kubernetes through a GUI, because managing multiple clusters across various platforms and substrates means dealing with the complexities of multiple access contexts, modes, and methods for organizing clusters, components, nodes, and infrastructure. Solving all of this from the command line is difficult, slow, and error-prone, especially as the number of clusters and applications keeps growing, not to mention their configurations and requirements.

With the Kubernetes Lens GUI, you can do several things:

  1. You can add clusters manually by browsing to their kubeconfigs, and Lens can immediately identify kubeconfig files on your local machine.
  2. With Lens, you can put these clusters into workgroups in whatever way you interact with them.
  3. Lens provides visuals on the state of objects such as Pods, Deployments, namespaces, network, storage, and even custom resources in your cluster. This makes it easy to identify and debug any issue with the cluster.

For the CLI lovers, Lens doesn’t leave you high and dry. You can also invoke its built-in terminal and execute your favorite kubectl command line.

Lens Terminal

The built-in terminal uses a version of kubectl that is API-compatible with your cluster. The terminal can:

  1. Automatically detect your cluster version and then assign or download the correct version in the background.
  2. Maintain the correct kubectl version and context as you switch from one cluster to another.

Kubernetes Lens terminal

Integrations

Lens gives you access and allows you to work with a wide variety of Kubernetes clusters on any cloud, all from a single, unified IDE. The clusters may be local (e.g., minikube or Docker Desktop) or external (e.g., Docker Enterprise, EKS, AKS, GKE, Rancher, or OpenShift). Clusters may be added simply by importing the kubeconfig with cluster details.

Lens Spaces

Kubernetes Lens promotes teamwork and collaboration via this feature called Spaces. It is a collaborative space for cloud-native development teams and projects. With a Lens space you can:

  1. Easily organize & access your team clusters from anywhere: GKE, EKS, AKS, on premises, or a local dev cluster.
  2. Easily access and share all clusters in a space securely.

Cluster Connect

In Kubernetes, sharing access to different clusters is difficult. As an administrator, you may work with different providers that each require their own tools; you have to obtain kubeconfig files, make them work with your kubectl, and then connect to the same network as the target cluster’s API. In practice, you will need a VPN to be on the same network as the provider, and in some cases you will also need to use different IAM providers. These are security risks because users might bypass security best practices.

Lens uses Cluster Connect to share access to the cluster without compromising the security of the cluster.

With Kubernetes Lens Spaces, you can send and receive invitation access to other clusters. All invitations are aggregated and then exposed to you using the Lens Kubernetes proxy. To access the clusters, you download the Cluster Connect agent in the desired cluster. The agent then allows you to connect to clusters from Lens Spaces using end-to-end encryption to secure connections between you and the clusters, eliminating the need for a VPN and the need for an inbound port to be enabled on the firewall. This also means you can access and work with their Kubernetes clusters easily from anywhere.

Cluster Connect is based on the BoreD OSS software. Check out the documentation to learn more about Cluster Connect.

Multiple Workspaces Management

Lens organizes clusters into logical groups called workspaces. This helps DevOps and SREs who have to manage multiple (even hundreds of) clusters. Usually, a single workspace contains a list of clusters and their full configuration.

Kubernetes Lens is one of the most effective Kubernetes UIs you’ll ever use. It supports CRDs and Helm 3, and it has a friendly GUI. Lens will, of course, also handle the cluster settings for you.

Recap of Key Features for Beginners

Kubernetes Lens provides situational awareness for everything that runs in Kubernetes, lowering the barrier to entry for developers just getting started. It is an ideal solution for many reasons, including:

  1. It provides the confidence that your clusters are properly set up and configured.
  2. There is increased visibility, real-time statistics, log streams, and direct troubleshooting facilities.
  3. The ability to organize clusters quickly and easily greatly improves productivity and business speed.
  4. EKS, AKS, GKE, Minikube, Rancher, K0s, etc.—any Kubernetes you might be using—all work with Lens. You only need to import the kubeconfigs for the appropriate clusters.
  5. Kubernetes Lens is built on an open source with an active community, supported by Kubernetes and cloud-native ecosystem pioneers.

Debugging Kubernetes in Production

Kubernetes Lens offers numerous great features, as this article has shown you. It is a standalone app, unlike the built-in Kubernetes dashboard. If you use Kubernetes and appreciate a rich GUI, then you should definitely check out Kubernetes Lens.

For developers looking to get visibility into their code regardless of environment or deployment type, from monolith to microservices, consider Lightrun.

For advanced users of the Kubernetes stack, Lens can’t provide the type of observability and debugging capabilities that Lightrun can for production applications in real-time. With Lightrun, developers can:

  • Troubleshoot Kubernetes easily by dynamically adding logs lines
  • Add as many logs as you need until you identify the problem
  • Multi-instance support (microservices, big data workers) using a tagging mechanism
  • Explore the call stack and local variables in any location in the code in the same version they occurred in
  • Traverse the stack just like a regular breakpoint
  • Add snapshots in the IDE you’re already using, easily
  • Need more snapshots? Add as many as you need. You’re not breaking the system

Naturally, Lightrun offers a robust yet easy-to-use way of monitoring the K8S stack, which you can try out yourself.

Lightrun is a secure, developer-native observability platform that enables you to add logs, snapshots, and metrics directly into your source code or application, in any environment. It really is the next level of Kubernetes Lens, allowing you to troubleshoot Kubernetes directly from any IDE.

With Lightrun, you can debug monolith microservices, Kubernetes, Docker Swarm, Big Data, and serverless in real time. Be sure to check out Lightrun for all of your cluster management needs.

The post What is Kubernetes Lens? appeared first on Lightrun.

]]>
Top 8 Database Version Control Tools https://lightrun.com/database-version-control-tools/ Fri, 22 Jul 2022 14:44:47 +0000 https://lightrun.com/?p=7550 Many DevOps teams struggle to achieve consistent builds and releases due to ineffective collaboration and communication strategies. Over 71% of software teams today are working remotely from global locations, according to a survey by Perforce and DevOps.com. Interestingly, this consistency challenge can be easily solved by a simple approach – database version control. Version control […]

The post Top 8 Database Version Control Tools appeared first on Lightrun.

]]>
Many DevOps teams struggle to achieve consistent builds and releases due to ineffective collaboration and communication strategies. Over 71% of software teams today are working remotely from global locations, according to a survey by Perforce and DevOps.com. Interestingly, this consistency challenge can be easily solved by a simple approach – database version control.

Version control streamlines database management, giving your entire team a common place to manage the codebase, communicate effectively, and collaborate easily, even from remote locations.

Managing databases using version control has been an oversight for most DevOps teams. Some of the challenges that database version control solves are:

  • Low productivity due to delays in code reviews
  • Distributed workforce leading to slow commits & merges
  • Complicated asset management as the software complexity grows
  • Database scalability issues due to tedious processes

In this article, we will learn what database version control is, its benefits, and the top tools to version control your database.

What is Database Version Control?

Database version control is the practice of tracking every change made to the database by every team member. Like application version control, database version control acts as a single source of truth. It empowers you with complete visibility, traceability, and continuous monitoring of the changes in your database.

Database versioning covers information such as database schema, indexes, views, stored procedures, functions, and database configurations. With different teams like developers and system admins working on the same database, it becomes crucial to version control your databases.

The existing market scenario demands faster application releases, made possible by simplifying application and database changes regardless of complexity. Unfortunately, the significance of database changes is often overlooked. According to the State of Database Deployments in Application Delivery, more than 57% of application alterations require corresponding database changes.

Accelerating database changes through version control has a slew of benefits. Some of them are listed below:

Version Control

  • Greater visibility – You get improved observability into your database: you can track the changes made, the team members who made them, and the complete change history. Database version control also helps you locate a bug’s source and resolve it rapidly.
  • Better collaboration – Irrespective of where your team works, they’ll always be on the same page concerning database changes. As version control forms the single source of truth, all the changes ever made are pushed to the source repository. Changes can be approved and merged in real time after verification by other team members.
  • Database rollbacks – Version control is an excellent backup strategy. If anything fails or doesn’t go as desired, you can quickly revert to the earlier version. These rollbacks can be utilized for root cause analysis of the issue, saving you significant time.
  • Compliance management – You can easily implement compliance and governance guidelines with a repository acting as a single source. Also, every change is tracked and logged, which makes auditing simpler.

Although version control has been a powerful concept to keep up with the software development complexity, teams often skip putting databases in version control because of challenges such as using multiple databases for development, demand for niche skills, and lack of tools for integration. With the number of tools available in the market, the task is to find the right database version control tool that fits your needs.

What to look for in Database Version Control tools?

The core idea behind version control is to ensure seamless collaboration between teams to accelerate software development. CI/CD (Continuous Integration and Continuous Delivery) is a DevOps practice integrating different code versions across stages and application deployment. When picking a database version control solution, you need to focus on the below aspects:

  • Communication capabilities – A communication channel for teams to come together to discuss or update each other is crucial in avoiding confusion and mistakes.
  • Security – As your team will be connecting to the tool from possible unsecured locations and networks, the solution should have robust security features.
  • Real-time editing – DevOps hinges on a continuous improvement concept, which means the team should be able to make the changes in real-time.
  • Traceability – Every code change should be reviewed and accounted for to avoid unwanted issues.
  • Integrations – Today’s software development scenario features a variety of development tools, and your database versioning tool must allow easy integration with different environments like microservices and Kubernetes.

Top 8 Database Version Control tools

1. Git

Git

Git is a free, open-source, widely used version control system. This distributed source control system allows you to host your data on locally saved folders called repositories. 

Your team can access all files from the local repository, which can also be stored online. With Git, work on a particular piece of functionality happens on a branch, and each branch comes with its own history. It becomes a part of the main project only when you merge it through a pull request.

Pros:

  • Allows experimentation as you can keep your work private
  • Enables a flexible workflow or process that fits you best
  • Safeguards data by effectively detecting data corruption

Cons:

  • The learning curve is pretty steep and can be overwhelming
  • Requires you to make a lot of decisions to implement changes

What are users saying?

“I like the options it provides developers to maintain repositories and help them collaborate in the best possible way.”

2. Mercurial

Mercurial

Mercurial (Hg) is a free, open-source, distributed source control management tool with an intuitive interface. Built with Python, Hg is a platform-independent tool. However, it limits history editing, as you can’t amend earlier commits.

Pros

  • An easy-to-use tool that is fast and requires no maintenance. 
  • Good documentation makes it easy for non-technical contributors
  • It has better security features

Cons

  • Not as flexible as other database version control tools
  • It allows only two parents per merge

What are users saying?

“Ease-of-use when performing operations like branching, merging, rebasing, and reverting file changes.”

3. CVS

CVS

CVS (Concurrent Versions System) is a solution that allows you to manage various versions of your source code. Your team can easily collaborate on the platform by sharing version files through a common repository. Unlike other tools, CVS doesn’t create multiple copies of your source code files. Instead, it keeps a single code copy and records all the changes made.

Pros:

  • High reliability since it doesn’t allow commits with errors
  • It only saves the revisions made to the code, making code reviews easy

Cons:

  • Working on CVS is a time-consuming affair
  • You can only store files in repositories

What are users saying?

“It’s simpler and less complex and has a good UI to make it easier.”

4. Lightrun

Lightrun

Lightrun is an open-source web interface and observability platform that follows a Git-like methodology. Every action and change your team makes is logged and can be audited readily. You can also add logs, metrics, and traces to your app in real time and on demand to resolve bugs faster in any environment. It offers significant security features like an encrypted communication channel, blocklisting, and a hardened authentication process.

Pros:

  • It comes with solid observability capabilities
  • Works transparently along with applications enabling zero downtime
  • You can significantly reduce time spent on debugging
  • Easy, command-based workflows

What are users saying?

“Great tool for faster incident resolution and real-time debugging without needing to add new code.”

5. Dolt

Dolt

Dolt is a SQL database that applies the Git versioning paradigm to data, unlike other version control tools: it versions tables instead of files, ensuring your updates and changes are never lost.

Pros

  • Partially open-source, lightweight, and easy to use
  • Convenient to analyze data because of the SQL interface

Cons

  • You will be bound to Dolt to realize its benefits
  • It is yet to be adopted widely
  • Dolt only versions tables, not any other data format

What are users saying?

“Easy to use and integrate with reports and dashboards.”

6. HelixCore

HelixCore

HelixCore is the version control solution from Perforce. It simplifies complex product development by tracking and managing changes to source code and other files. It uses the Streams feature to branch and merge your configuration changes. HelixCore makes it easy to investigate change history and is highly scalable.

Pros:

  • It comes with a native command-line tool
  • Capability to integrate with third-party tools
  • Better security with multiple authentications & access features

Cons:

  • It involves a complex workflow and user management
  • Higher resource provisions are needed, so it can get expensive

What are users saying?

“It’s extremely simple to find what you are looking for and use it to complete tasks and the ability to track assets easily.”

7. LakeFS

LakeFS

LakeFS is an open-source data versioning tool that enables you to scale your data to Petabytes using S3 or GCS for storage. It follows a Git-like branching and committing practice in line with ACID (Atomicity, Consistency, Isolation, and Durability) compliance. This way, you can make changes in private and isolation that can be created, merged, and rolled back immediately.

Pros:

  • Seamless scalability enabling large data lakes
  • Allows version control for both development & production stages
  • Offers advanced features like ACID transactions with cloud storage

Cons:

  • Being a new product, it will have frequent feature changes
  • You will need to integrate it with other tools

What are users saying?

“It is possible to develop schema changes in YAML and JSON, which is the order of the game nowadays.”

8. Liquibase

Liquibase

Liquibase is a migration-based database version control tool that uses changelog functionality to track the changes you make to your database. It defines changesets in formats such as XML, which lets you apply the same schema changes across different database platforms. It comes in two variants: open-source and premium.

Pros:

  • Allows targeted rollbacks to undo changes
  • Supports a variety of database types
  • Enables you to specify changes in multiple formats, including SQL, XML, and YAML

Cons:

  • Advanced features are only available in the paid version
  • It needs significant time and effort to use the tool better

What are users saying?

“Easy to integrate, and we can version control the changes by maintaining all the changeset.”

Summary

Database version control is a powerful concept that can give your application development methodology an extra edge. There are multiple tools available today – both free and paid. We have listed the top 8 database versioning tools used widely today. However, you must thoroughly understand your requirements and development pipeline before choosing the tool.

Lightrun can be an ideal pick to complement your development landscape as it has strong security and observability features. Start using Lightrun today, or request a demo to learn more.

 

The post Top 8 Database Version Control Tools appeared first on Lightrun.

]]>
Top 12 Site Reliability Engineering (SRE) Tools https://lightrun.com/site-reliability-engineering-tools/ Wed, 20 Jul 2022 18:00:09 +0000 https://lightrun.com/?p=7529 Ben Treynor Sloss, then VP of Engineering at Google, coined the term “Site Reliability Engineering” in 2003. Site Reliability Engineering, or SRE, aims to build and run scalable and highly available systems. The philosophy behind Site Reliability Engineering is that developers should treat errors as opportunities to learn and improve. SRE teams constantly experiment and […]

The post Top 12 Site Reliability Engineering (SRE) Tools appeared first on Lightrun.

]]>
Ben Treynor Sloss, then VP of Engineering at Google, coined the term “Site Reliability Engineering” in 2003. Site Reliability Engineering, or SRE, aims to build and run scalable and highly available systems. The philosophy behind Site Reliability Engineering is that developers should treat errors as opportunities to learn and improve. SRE teams constantly experiment and try new things to enhance their support systems.

SRE is a new field that combines aspects of software engineering and operations. Job openings for Site Reliability Engineers surged by more than 72% in the US in 2019, making it one of the most sought-after roles. SREs provide critical value for an organization’s cyber security policy implementation and upgrades. 

What is Site Reliability Engineering?

Site reliability engineering (SRE) is an area that combines aspects of software engineering and operations. The average cost of a system’s downtime comes to around $5,600 per minute, equivalent to more than $300,000 per hour.

The main goal of SRE is to ensure that a site or service is available and performing well. SREs do so by designing and building systems that are resilient to failure and by monitoring and responding to incidents when they occur.

While this sounds a lot like DevOps – it’s not. The main difference between SRE and DevOps is that SRE places a greater emphasis on reliability and availability, while DevOps focuses on speed and agility. A Site Reliability Engineer’s role is to ensure that systems are reliable and available while providing DevOps-style automation and efficiency.

Some of the specific benefits of SRE include:

  1. Reduced downtime: By designing systems to be resilient to failure and monitoring and responding to incidents quickly, SRE can help reduce the time a site or service is unavailable.
  2. Improved quality: SRE can help improve the overall quality of service by making it more reliable and easier to operate.
  3. Reduced costs: SRE can prevent outages and disruptions and ensure that systems can recover quickly when problems occur.

Top 12 Site Reliability Engineer (SRE) Tools

SRE tools can be divided into the following categories: 

APM (Application Performance Management) and Monitoring Tools

APM tools help businesses identify and diagnose issues with their applications. Monitoring tools enable companies to identify and diagnose problems with their infrastructure. Both tools are essential for businesses to ensure that their applications and infrastructure run smoothly.

1. Datadog

Datadog

Rated 4.3 out of 5 by over 300 reviews on G2, Datadog is a monitoring service for cloud-scale applications, providing end-to-end visibility across the application stack. Organizations of all sizes use it to troubleshoot issues, gain insight into their applications, and ensure business continuity.

Datadog has many advantages, including scalability, integrations with over 350 technologies, and monitoring infrastructure and applications in a single platform. Datadog provides features specifically designed for large organizations, such as role-based access control and auditing.

However, Datadog can be expensive for large organizations. It can also lack some of the features of more specialized monitoring tools, such as application performance management (APM).

Pros: 

  • Allows for monitoring of multiple servers at once
  • Flexible and easily customizable
  • Detailed information and graphs are available
  • You can set up alerts to notify you of any issues

Cons: 

  • Can be expensive
  • It may be overwhelming if you are monitoring a lot of servers
  • Not as widely known/used as some other monitoring tools

2. Lightrun

Lightrun

Lightrun is the perfect tool for developers who want to test and debug their code in real-time. It is a cloud-based application that enables developers to identify and fix errors in their code faster and more efficiently.

Lightrun tools help developers and ops teams to work together more efficiently and to improve the quality of their services. It’s also an excellent way to test code changes in a live environment without affecting all users.

Overall, Lightrun is a helpful tool for developers who want to test their code in a production environment, especially when things go wrong and taking the application down is not an option. It’s quick and easy to use and can save time and headaches in the long run.

Pros: 

  • Easy to use
  • It can be used to test various aspects of applications while in production 
  • It can be used to track bugs on-prem and in real-time
  • Good for keeping on top of security compliance through active monitoring
  • A free trial is available

3. New Relic 

New Relic

New Relic’s software provides real-time data about web application performance. Developers use this data to identify and diagnose issues. The software also provides insights into the performance of mobile applications. 

New Relic has a free and paid subscription. The free subscription provides data on up to 100 applications, while the paid subscription provides data on an unlimited number of applications.

Pros: 

  • It offers a wide range of features 
  • It has strong community support 
  • It can be easily integrated with other tools 
  • A free trial is available

Cons: 

  • Relatively expensive 
  • It slows down some servers 
  • Some features can be confusing to set up

Automated Incident Response System

An automated incident response system automates incident response tasks, such as identifying, containing, and eradicating incidents. It does this by integrating multiple security tools and technologies to streamline the incident response process. Automated incident response systems can help businesses reduce the time and resources needed to respond to incidents and improve the effectiveness of the response.

4. Grafana 

Grafana

Grafana is a data visualization tool that allows you to see and analyze data in real-time. Developers and data scientists use it to debug applications and understand data flows. Grafana has various uses, including monitoring server performance, visualizing database queries, and monitoring application performance.

Grafana is open source and free to use. It is available for Windows, Mac, and Linux. Grafana is easy to use and has a wide variety of plugins. Grafana also provides built-in data sources and has alerting capabilities.

Pros: 

  • Allows for easy creation and visualization of complex data queries
  • It can be used to monitor multiple data sources easily
  • It is highly customizable and allows for the creation of custom dashboards

Cons: 

  • It may be overwhelming for users who are not familiar with data visualization
  • It can be challenging to set up and configure
  • Limited documentation and support

5. PagerDuty

PagerDuty

PagerDuty is an automated incident response system that organizations use to help manage and respond to incidents. It is a cloud-based platform that provides users with the ability to create and manage incidents, as well as to track and monitor response times and incident resolution.

PagerDuty has some features that make it a valuable tool for managing critical incidents. It allows organizations to create and manage incident response plans, track and manage incidents, and communicate with incident response teams. It also provides a variety of reports and tools for analyzing and responding to incidents.

PagerDuty also has some drawbacks: it can be challenging to set up and use, and it can be expensive. It also lacks some features that would be useful for managing critical incidents, such as the ability to integrate with other incident response systems.

Pros:

  • Easily integrate with other tools and systems
  • Flexible and customizable
  • It can be used for on-call scheduling
  • Real-time visibility into incidents

Cons:

  • Can be expensive
  • Complex to set up
  • Not all features are available in all plans
  • It can be challenging to use for some users

 

6. HoneyComb

HoneyComb

Honeycomb is an observability platform that can also support incident response. One of its key benefits is that it can help organizations save time and resources when responding to security incidents. The system’s automated incident response capabilities can help organizations quickly identify and investigate the root cause of an incident. Additionally, the integration with SIEM systems can help organizations automate many of the tasks associated with incident response, such as threat analysis and classification.

While Honeycomb can be a valuable tool for incident response, the system can be expensive to purchase and implement. Additionally, Honeycomb requires a high degree of technical expertise to configure and use effectively. The system’s reliance on data from multiple sources can make it challenging to use in environments where data is siloed.

Pros: 

  • Can help identify slow or inefficient queries
  • Can track database activity over time
  • Can help optimize database performance
  • Provides a web-based interface for easy access

Cons:

  • Requires a paid subscription
  • It may be challenging to set up and configure
  • It may not be compatible with all database systems
  • Limited customer support

Real-Time Communication tools

Real-Time Communication (RTC) tools are software applications that allow users to communicate with each other in real-time. RTC tools are typically used for voice and video communication but can also be used for text-based communication, file sharing, and collaboration.

RTC tools are suitable for businesses and their teams because they allow for quick and efficient communication between team members. Teams can use RTC tools for various purposes, such as team meetings, training sessions, and customer support. RTC tools also help improve communication between remote team members.

7. Microsoft Teams

Microsoft Teams

Microsoft Teams is a real-time communication tool that is part of the Microsoft Office 365 suite of productivity tools. It is designed for businesses of all sizes and offers a variety of features, including file sharing, chat, video conferencing, and more. However, it requires a subscription to Office 365. 

Pros: 

  • Allows for accessible communication and collaboration between team members
  • It can be accessed from anywhere with an internet connection
  • Integrates with other Microsoft products
  • It has a variety of features and tools to improve productivity

Cons: 

  • It may be challenging to learn how to use all the features
  • It can be glitchy or slow at times
  • Some features may not be available in all countries

 

8. Slack

Slack

Slack is a real-time communication tool that allows users to communicate with each other via messaging. It is similar to other messaging tools such as WhatsApp and Facebook Messenger but has some unique features that make it stand out.

The pros of Slack include its user-friendliness and its integration with a wide variety of tools and services. However, keeping up with all the messages can be overwhelming if team members are part of too many channels.

Pros: 

  • Allows for clear and concise communication within a team
  • It helps to keep everyone organized and on the same page
  • It can be accessed from anywhere
  • It makes it easy to find old conversations

Cons: 

  • It can be a distraction if not used properly
  • It can be overwhelming if there are too many channels
  • People can easily get lost in conversation threads

 

9. Telegram

Telegram

Telegram is a messaging app focused on speed and security. It’s super-fast, simple, and accessible. You can use Telegram on all your devices — your messages sync seamlessly across any number of your phones, tablets, or computers.

With Telegram, you can send messages, photos, videos, and files of any type (doc, zip, mp3, etc.), as well as create groups for up to 200,000 people or channels for broadcasting to unlimited audiences. 

You can write to your phone contacts and find people by their usernames, like SMS and email combined. The main drawback of Telegram is that it is banned in some countries, which may be a significant pain if your team members are spread across the globe.

Pros: 

  • It can be used on multiple devices 
  • It has a self-destruct feature 
  • It can be used without a phone number 

Cons: 

  • Security concerns 
  • It may be blocked in some countries 
  • It is less popular than other messaging apps

Configuration Management tools

Configuration management tools help businesses and their teams manage configurations, or settings, across their environment. Configuration management tools automate and simplify setting and maintaining consistent configurations across multiple servers and devices. This can help businesses avoid configuration drift, leading to inconsistency and errors. Configuration management tools can also help companies to recover from configuration changes that cause problems.

10. Ansible 

Ansible

Ansible is a configuration management tool that automates tasks, such as software deployments, provisioning, and configuration. It is often used for managing server deployments and managing both small and large-scale infrastructure. It is also open source and is available for free.

The tool is simple and easy to use. It is agentless, meaning it does not require any software installed on the target machines. Ansible is also idempotent, so running a task multiple times will have the same effect as running it once.

This simplicity is a big part of why Ansible is such a popular configuration management tool. However, because Ansible is agentless, it can be difficult to troubleshoot when things go wrong.

Pros: 

  • It is straightforward to use and doesn’t require any unique setup or configuration
  • Ansible playbooks are easy to read and understand
  • It can be used to manage a large number of servers from a central location
  • It can be used to automate many system administration tasks

Cons: 

  • Ansible playbooks can become very complex and challenging to maintain
  • It can be slow to run, especially on large systems
  • Ansible can be tricky to debug
  • It is not a good choice for real-time management of servers

11. SaltStack 

SaltStack

SaltStack is a Python-based configuration management tool to manage server configurations, deployments, and orchestration.

However, it is not as widely used as some other configuration management tools, so there is less community support and fewer resources available. Additionally, SaltStack is oriented toward Linux: the Salt master runs only on Linux servers, although minions are available for other platforms.

Pros: 

  • Saltstack can manage large numbers of servers very efficiently
  • Saltstack’s declarative approach to configuration management means that configurations are easy to understand and maintain
  • It is very scalable and can be used to manage thousands of servers
  • It is fast and can apply changes to a large number of servers very quickly

Cons: 

  • Saltstack can be complex to learn and use
  • Requires a good understanding of system administration to be used effectively
  • Saltstack can be difficult to debug when things go wrong
  • It can be resource-intensive and may not be suitable for minimal deployments

 

12. Terraform

Terraform

Terraform is a configuration management tool used to manage infrastructure as code. It is popular among DevOps professionals because it is declarative, meaning that it describes the desired state of the infrastructure. It is also idempotent, so running the same Terraform configuration multiple times will result in the same final state.

Advantages of Terraform include infrastructure as code and execution plans. However, a significant drawback for teams is its learning curve and potential vendor lock-in for complex designs.

Pros: 

  • It can manage large-scale deployments
  • It can easily provision resources
  • It can manage dependencies between resources
  • It can automate deployment processes

Cons: 

  • Difficult to learn
  • Difficult to manage complex configurations and to debug
  • It can be slow

Next Steps

For DevOps teams, the importance of having the right SRE tool can’t be overstated, as it can make all the difference in keeping your business up and running.

Lightrun provides all of the features you need to manage your applications effectively. It offers application performance monitoring, application management, and even application security features. If you’re looking for a way to automate the implementation and maintenance of your logging, metrics, and tracing, then Lightrun is the tool for you. Start using Lightrun today.

 

The post Top 12 Site Reliability Engineering (SRE) Tools appeared first on Lightrun.

]]>