
Introduction to Kubernetes


Chapter 6

Running GKE in Production

After comparing the managed offerings from Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), Leverege chose Google Kubernetes Engine (GKE) to run its large-scale IoT solutions in production. Every organization’s needs are different, but given the unique constraints that the current IoT landscape poses, here’s why Leverege chose GKE and what we’ve learned.

Best Price

First and foremost, GKE provides the best price of any managed Kubernetes service. Unlike Amazon Elastic Container Service for Kubernetes (Amazon EKS), which charges for the management of master nodes ($0.20/hr), GKE and Azure Kubernetes Service (AKS) charge only for the virtual machines (VMs) running the Kubernetes nodes. This means that master node usage, cluster management (auto-scaling, network policy, auto-upgrades), and other add-on services are provided free of charge. Thanks to Google's automatic sustained-use discounts and the option to use preemptible VMs, we found that GKE ends up slightly cheaper than AKS.
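To put the EKS master-management fee in perspective, here is a quick back-of-envelope calculation using the $0.20/hr rate cited above (assuming a 30-day month; on GKE and AKS this line item is simply $0):

```shell
# Back-of-envelope: what the EKS control-plane fee adds per cluster per month.
awk 'BEGIN {
  rate_per_hr = 0.20        # EKS master-management fee cited above
  hrs_per_month = 24 * 30   # ~30-day month
  printf "EKS control plane: $%.2f/month per cluster\n", rate_per_hr * hrs_per_month
}'
# -> EKS control plane: $144.00/month per cluster
```

For a team running several small clusters, that fixed fee can rival the cost of the worker VMs themselves.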

Disclaimer: Pricing comparisons across the three vendors are tricky, since enterprise discounts, exact machine configurations, and storage/GPU needs vary significantly. In general, though, for a standard setup, Leverege found GKE to be the cheapest.

Low cost is extremely important for price-sensitive IoT projects, especially those using a low-power wide-area (LPWA) network. Take a tank monitoring solution as an example. A typical deployment would have a small sensor reporting to the network a few times a day with a very small data payload to conserve power. In our experience, customers are only willing to pay around $1/month per sensor to achieve an adequate return on investment (ROI) for a simple IoT system. Once the hardware and network costs are factored in, the budget left for cloud infrastructure becomes tightly constrained, and that is before factoring in operational costs unrelated to pure technical support.

Superior Onboarding & Management Experience

When you are paying for a managed service, you expect a seamless onboarding experience and minimal intervention on your part to keep the infrastructure running. Of the three providers, GKE gives you the smoothest Kubernetes management experience.

When you begin, you can spin up a cluster with a few clicks. After enabling the Kubernetes Engine API, you specify the VM type and the number of nodes, then click create. GKE takes care of regional deployment, auto-scaling, auto-upgrades of both master and worker nodes, and network policy configuration via Calico. While you can self-manage and deploy all of these on AKS and Amazon EKS, on GKE they are provided and managed by Google free of charge, so you can focus on building applications on top of Kubernetes.
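The same setup can be scripted from the gcloud CLI instead of the console. The sketch below uses illustrative names and sizes (`demo-cluster`, `us-central1-a`, `n1-standard-2` are placeholders to adapt) and assumes an authenticated Cloud SDK:

```shell
# Enable the Kubernetes Engine API for the current project
gcloud services enable container.googleapis.com

# Create a 3-node cluster with Calico-backed NetworkPolicy enforcement
gcloud container clusters create demo-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-2 \
  --num-nodes 3 \
  --enable-network-policy

# Point kubectl at the new cluster
gcloud container clusters get-credentials demo-cluster --zone us-central1-a
```

From there, auto-upgrades and node auto-repair are handled by GKE rather than by you.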

The best part of GKE, in our experience, is the cluster autoscaler. GKE automatically adds or deletes nodes when Kubernetes can no longer schedule Pods on the existing nodes. The cluster autoscaler is tied to a node pool, so you can autoscale different node pools independently as needed. This feature really shines when you run a mixed setup of preemptible nodes and nodes reserved for intensive data workloads. For example, you might run a node pool of preemptible VMs for non-critical workloads that scale on demand, while keeping data-intensive workloads or StatefulSets on another node pool. GKE stays Kubernetes-aware and scales automatically for you, which Amazon EKS and AKS do not provide by default.
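A mixed setup like the one described above can be sketched with a second, preemptible node pool that has autoscaling bounds (pool and cluster names and the node limits here are illustrative):

```shell
# Add an autoscaling pool of preemptible VMs to an existing cluster.
# GKE adds nodes when Pods are unschedulable and removes them when
# nodes sit underutilized, within the min/max bounds.
gcloud container node-pools create preemptible-pool \
  --cluster demo-cluster \
  --zone us-central1-a \
  --preemptible \
  --enable-autoscaling \
  --min-nodes 0 \
  --max-nodes 10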

Another feature we like about GKE is its close integration with Stackdriver, which has come a long way in recent years. With Stackdriver agents already installed on GKE nodes, you can easily use the preset GKE monitoring charts and logging before deciding whether to set up more custom systems (e.g. Prometheus/Grafana, ELK).
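Because the agents ship with the nodes, container logs are queryable from the CLI with no extra setup. A sketch, assuming the illustrative cluster name from earlier and the Stackdriver Kubernetes monitoring integration enabled on the cluster:

```shell
# Opt an existing cluster into Stackdriver Kubernetes monitoring
gcloud container clusters update demo-cluster \
  --zone us-central1-a \
  --enable-stackdriver-kubernetes

# Tail the 20 most recent container log entries from that cluster
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.cluster_name="demo-cluster"' \
  --limit 20
```

This is often enough for early-stage debugging, deferring the decision to run your own Prometheus or ELK stack.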

Leading Contributor to K8s & Related Work

Before Google open-sourced Kubernetes, it had been running Borg and Omega internally for years. Back in 2014, Google shared that it was already launching over two billion containers per week. No other player can match Google's 10+ years of container management at that scale. This is most evident in the number of new features that roll out to GKE before Amazon EKS or AKS. The newest versions of Kubernetes are often available on GKE first, not to mention beta features such as tensor processing unit (TPU) support.

Google's work also shines in Kubernetes-related technologies. Google has been an important contributor to Spinnaker, a battle-tested continuous deployment (CD) platform, as well as Istio, a cloud-native service mesh developed with IBM and Lyft. Google is also active in toolkits and extensions for machine learning on Kubernetes (Kubeflow) as well as support for serverless workloads. As the announcements from Google NEXT 2018 showed, Google is committed to growing the Kubernetes community, and GKE is a great platform on which to start that journey.