Developers
June 11, 2020

Debugging Tools in Google Kubernetes Engine

Debugging in GKE made easy: there are two main ways to debug professionally.
Source: Unsplash

Today we will talk about some tools that can be helpful when debugging your apps in Google Kubernetes Engine.

Google Kubernetes Engine (GKE) is a managed environment for running Docker containers. The service also manages container clusters that run on Google Cloud. GKE is based on Kubernetes, the open-source container orchestration system.

GKE is used for several reasons, and as a developer, you might want to check it out. With it, you can create container clusters, Pods, controllers, and Services.

GKE provides layered security that defends containerized workloads. Clusters running on GKE can be made private, so that only certain authorized addresses can access the service.

To use GKE, you can use the Google Cloud command-line tool (gcloud) or the Google Cloud Platform Console. GKE is mostly used by developers who are creating enterprise applications, and it also works well for web servers.
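
As a quick sketch, assuming the Cloud SDK is installed and a project is already selected, a cluster can be created and inspected from the command line (the cluster name and zone below are placeholders):

    # Create a GKE cluster (name and zone are examples)
    gcloud container clusters create demo-cluster --zone us-central1-a

    # Fetch credentials so that kubectl talks to the new cluster
    gcloud container clusters get-credentials demo-cluster --zone us-central1-a

    # Verify that the cluster and its workloads are reachable
    kubectl get nodes
    kubectl get pods --all-namespaces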

Cloud Monitoring and Logging

There are two main ways developers can debug their applications faster: Cloud Monitoring and Cloud Logging. These two approaches bring speed and efficiency when resolving code issues.

The main way Cloud Monitoring helps the development team is by providing alerts. Through alerting, developers can receive notifications via email, SMS, or directly in third-party applications.
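
For example, an email notification channel can be created from the command line. This is a sketch that assumes the beta gcloud monitoring surface is available in your SDK version; the display name and address are placeholders:

    # Create an email notification channel for alerting (address is a placeholder)
    gcloud beta monitoring channels create \
      --display-name="On-call email" \
      --type=email \
      --channel-labels=email_address=dev@example.com

    # List existing channels to confirm it was created
    gcloud beta monitoring channels list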

A common error developers get in GKE is an HTTP 500 error. Let's follow that example and see how we can debug it quickly. First, you have to set up your alerting settings. Once that is done, you will receive detailed incidents, each with a "view incident" button in the form of a link. Clicking this link opens the alerts section of the Monitoring user interface.

In the following example, the developer receives an alert from Cloud Monitoring about an error in an application running on a Kubernetes cluster. The developer needs to investigate the error. This app has multiple microservices and dependencies.

The first place the developer should look for error data is the GKE Monitoring console. In the Workloads view, select the cluster in use; there you can see usage data and resources for the containers running in the selected cluster.
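
If you prefer the command line, roughly the same information is available through kubectl (kubectl top relies on the metrics pipeline GKE provides by default; the deployment name below is a placeholder):

    # Resource usage per node and per pod, similar to the Workloads view
    kubectl top nodes
    kubectl top pods --all-namespaces

    # Drill into a single workload (deployment name is an example)
    kubectl describe deployment frontend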

Reported Error

The error says that CPU utilization is very high, leading to an overloaded machine that cannot respond to requests coming from users on the frontend. You can set up CPU and memory utilization alerts so that, when this happens, it is resolved in less time. Although the original error was an HTTP 500, we can see how the console surfaces another, underlying problem: the CPU is overcommitted.
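
A quick way to see whether a workload is hitting its CPU budget is to compare what the Pods request with what they actually use. This sketch assumes the frontend Pods carry an app=frontend label:

    # Requested CPU per pod (label selector is a placeholder)
    kubectl get pods -l app=frontend \
      -o custom-columns=NAME:.metadata.name,CPU_REQUEST:.spec.containers[*].resources.requests.cpu

    # Actual CPU and memory consumption of the same pods
    kubectl top pods -l app=frontend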

Using the Logs Viewer, the developer can run detailed queries and browse the log history and individual log entries. The history section shows how often log entries occur, which can be a powerful tool for identifying application issues.
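
The same kind of query can be run from the command line with gcloud. This is a sketch; the cluster name is a placeholder:

    # Show the most recent error-level entries from containers in the cluster
    gcloud logging read \
      'resource.type="k8s_container" AND resource.labels.cluster_name="demo-cluster" AND severity>=ERROR' \
      --limit=20 \
      --freshness=1h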

When the developer expands the error entries, they will see something like this: "failed to get product recommendations: RPC error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial TCP 10.55.247.125:8080: connect: connection refused"". This error message matches the original HTTP 500 error on the frontend.
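
A "connection refused" from a dependency usually means the backend Pods are unhealthy or unreachable, so a natural next step is to inspect them directly. In this sketch the service and label names follow the example above and are assumptions:

    # Are the recommendation service pods running and ready?
    kubectl get pods -l app=recommendationservice

    # Does the Service actually have healthy endpoints behind it?
    kubectl get endpoints recommendationservice

    # Logs from the backend deployment (add --previous on a specific pod
    # if its container has been restarting)
    kubectl logs deployment/recommendationservice --tail=50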

What has been found is that in the logs for the "recommendation service" there are no errors generated by the service itself. This confirms that the problem comes from the code deployment that causes the CPU to be used in excess. Now that you know this, you can either raise the CPU resources to match the utilization or run the program in a way that is less costly to the machine.
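
Both remedies can be expressed with standard kubectl commands. This is a minimal sketch, assuming the affected deployment is called frontend; the values shown are illustrative:

    # Raise the CPU request and limit of the deployment (values are examples)
    kubectl set resources deployment frontend --requests=cpu=250m --limits=cpu=500m

    # Or let the workload scale out when CPU gets hot
    kubectl autoscale deployment frontend --cpu-percent=70 --min=2 --max=5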

The second way to debug is logging. For this, you can read the logs directly, or use a dedicated application that can be configured to follow the process, such as Elastic GKE Logging, which provides features for collecting and analyzing logs within a Kubernetes cluster.
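
Direct logging can be as simple as pulling the container logs straight from the cluster. This is a sketch, and the deployment name is a placeholder:

    # Read the last hour of logs from a workload directly
    kubectl logs deployment/frontend --since=1h --tail=100

    # Stream the logs live while reproducing the issue
    kubectl logs deployment/frontend -f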

In conclusion, if you are currently using Google Kubernetes Engine, be sure to know the best practices for debugging it well. By "the best way" I mean the quickest and most efficient way for the developer. The two practices are Cloud Monitoring and Cloud Logging. Cloud Monitoring consists of debugging through the Monitoring console and is built around alerts. These alerts can be set up by the developer so that when there is an error, a notification reaches them via email, SMS, or a third-party application. Logging consists of collecting and analyzing the application's logs, either directly through the Logs Viewer or with a helper application such as Elastic GKE Logging.

Tags: Google Kubernetes, Debug
Lucas Bonder
Technical Writer
Lucas is an Entrepreneur, Web Developer, and Article Writer about Technology.
