Google Kubernetes Engine (GKE) provides a managed environment for deploying, managing, and scaling container-based applications on Google infrastructure. A GKE environment consists of multiple machines (specifically, Compute Engine instances) grouped together to form a container cluster, and it is suitable for deploying even complex web applications.
In this article, we will walk through Google Kubernetes Engine and deploy a sample app on a GKE cluster. Kubernetes is currently the most widely used container orchestration tool and is well suited to deploying applications packaged as containers. Moreover, most cloud providers, such as AWS, Google Cloud, and Azure, offer their own managed Kubernetes services.
Kubernetes Engine is powered by Kubernetes, the open-source cluster management system.
How to deploy an application in Kubernetes
Let us walk through the procedure for deploying an application on GKE.
First, log in to your account and open the Google Cloud Platform console. In the left-hand menu, under the Compute category, you will find an option named Kubernetes Engine.
Then click the Quick Start button.
Give a Name to the Cluster
This guide helps you create a Kubernetes cluster with the number of nodes your application requires. Everything within this cluster is configurable.
Starting from the left side, the first step is to give the cluster a name. The name can be anything you like.

Select a Location
In the second step we pick a location. For the sake of simplicity we can select any, but in production you should choose the location that gives your end users the lowest latency so the application is served faster. A cluster can be deployed regionally or zonally; a regional cluster costs more than one confined to a single zone.
Set Release Channel
Next, we set the release channel, which determines the version of Kubernetes Engine the cluster runs. The cluster's release channel can't be modified later, so choose carefully. There are three channels: Rapid, Regular, and Stable.
Rapid:
On this channel you get upgrades fastest and always run the latest version of Kubernetes Engine, but releases may still contain unresolved bugs or errors. For that reason, this channel is not recommended for production.
Regular:
The Regular channel is useful for customers who want to test the latest releases before they qualify for production. Some known issues or errors may occur, but they come with known limitations.
Stable:
The Stable channel has been tested thoroughly and validated for production-grade apps. Alternatively, users can pin a specific static version of Kubernetes Engine (GKE) instead of following a channel.
Choose the right Resources
Here, a user can configure the cluster's nodes.
· The machine family includes three types, and the user can select according to their requirements. Memory-optimized machines suit memory-intensive workloads, such as real-time model training. Compute-optimized machines are high-performance and useful for workloads like scientific modeling. There are also general-purpose machines that perform reasonably well across all workloads.
· Within a family, the user selects a machine type from a range of CPU and memory sizes. If you are on free credits and trying this out for learning purposes, always choose the lowest option; it saves credit and lets you do much more with what remains.
· Enabling auto-scaling lets the cluster scale nodes up and down. We are not enabling it here since our aim is simply to learn, but in production the user can set the size according to client demand.
· With Telemetry, users can enable logging of system status, such as crashes, incidents, and much more.
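If you later decide to turn auto-scaling on for an existing cluster, the gcloud CLI can do it. Below is a dry-run sketch: the cluster name, zone, and node counts are assumptions, and the command is only echoed rather than executed, so you can inspect it before running it for real.

```shell
# Hypothetical cluster name and zone; substitute your own values.
CLUSTER_NAME="my-first-cluster-1"
ZONE="us-central1-c"

# Build the command that enables node auto-scaling between 1 and 3 nodes.
# Remove the 'echo' to actually apply it to your cluster.
AUTOSCALE_CMD="gcloud container clusters update ${CLUSTER_NAME} --zone ${ZONE} --enable-autoscaling --min-nodes 1 --max-nodes 3"
echo "${AUTOSCALE_CMD}"
```

In production you would pick `--min-nodes`/`--max-nodes` based on expected client load rather than these illustrative values.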
Cluster Reviewing
The user reviews the settings, clicks Make changes if anything needs adjusting, and then clicks Create, at which point Google Cloud Platform starts provisioning the cluster. Within a minute or so the cluster will be initialized and running.
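The same cluster can also be created from the command line instead of the console. Here is a dry-run sketch of the equivalent `gcloud container clusters create` invocation; the name, zone, node count, and machine type are assumptions matching the walkthrough above, and the command is echoed rather than executed.

```shell
# Hypothetical name, zone, and sizing; substitute your own values.
CLUSTER_NAME="my-first-cluster-1"
ZONE="us-central1-c"

# Build the equivalent cluster-creation command; remove the 'echo'
# (after authenticating with 'gcloud auth login') to actually create it.
CREATE_CMD="gcloud container clusters create ${CLUSTER_NAME} --zone ${ZONE} --num-nodes 3 --release-channel stable --machine-type e2-small"
echo "${CREATE_CMD}"
```

Passing `--zone` gives a zonal cluster; use `--region` instead for a regional one, keeping in mind the higher cost noted earlier.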
After this, we will deploy the application on GKE as follows.
Deploy a sample application in Kubernetes
Here we will deploy a sample hello-app, Google's small Go web server sample.
First, install the kubectl component via the gcloud command-line tool:
gcloud components install kubectl
After installing the Kubernetes client, we set the default project ID and compute zone.
gcloud config set project i-return-273913
gcloud config set compute/zone us-central1-c
Having configured the default project and compute zone, we now fetch authentication credentials for the cluster.
gcloud container clusters get-credentials my-first-cluster-1
Our cluster is named “my-first-cluster-1”; users should substitute their own cluster name.
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
This creates a Deployment named “hello-server” from the sample hello-app image, a small web server written in Go, at its 1.0 tag. Users can also pass the --replicas=3 flag to scale the Deployment up to three Pods.
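The same Deployment can be written declaratively as a manifest and applied with `kubectl apply -f`. The sketch below is the approximate equivalent of the imperative command above; the label `app: hello-server` is an illustrative choice, and `replicas: 1` mirrors the default.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 1            # set to 3 to mirror the --replicas=3 flag
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080   # port the sample app listens on
```

A manifest like this can be checked into version control, which is generally preferred over imperative commands for production workloads.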
kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080
We then expose the Deployment through a LoadBalancer Service on port 80, forwarding traffic to the container's port 8080. The app is now accessible from the Internet as well.
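This Service, too, has a declarative equivalent. A sketch of the manifest, assuming the Pods carry an `app: hello-server` label to match on:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-server
spec:
  type: LoadBalancer      # provisions an external Google Cloud load balancer
  selector:
    app: hello-server     # routes to Pods carrying this label
  ports:
  - port: 80              # external port on the load balancer
    targetPort: 8080      # port the hello-app container listens on
```

The `selector` is what ties the Service to the Deployment's Pods, so the Service keeps working even as individual Pods come and go.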
kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-server-5bfd595c65-lm5rr 1/1 Running 0 2m26s
Here we check the Pods; since no replica count was provided, the default is a single Pod. The Pod should be in the Running state and ready to serve.
kubectl get service hello-server
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-server LoadBalancer 10.105.2.251 35.223.166.136 80:30098/TCP 71s
Furthermore, we can now check the running status of the GKE Service we created. The Pods are exposed to the Internet through this Kubernetes Service.
Each Pod has its own IP address, which is reachable only from inside the cluster. Moreover, GKE Pods are designed to be ephemeral, spinning up or down based on scaling needs, and whenever a Pod crashes due to any error, GKE automatically redeploys it, assigning a new Pod IP address each time.
This means that for any Deployment, the set of IP addresses corresponding to the active set of Pods is always dynamic.
Summing Up
With that, we conclude this walkthrough of deploying applications with GKE, which should give you a good working sense of how to use it.