Summary
Ingress objects can interfere with each other in the cluster. If you deploy an incorrect Ingress definition (and Kubernetes does not detect the error), the Nginx Ingress Controller will fall into a restart loop and won't accept any new configuration. This breaks every deployment from that point onwards.
Description
I have been trying out how wildcards work in the Nginx Ingress Controller. You need them when you want a URL prefixed with the application's language or country.
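Not from the post itself, but a minimal sketch of the kind of path-prefixed rule involved; the host, service name, and port are placeholders:

```
# Hypothetical Ingress sending language-prefixed paths to one backend.
# Host, service name, and port are illustrative placeholders.
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-languages
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /en
        backend:
          serviceName: myapp
          servicePort: 80
      - path: /de
        backend:
          serviceName: myapp
          servicePort: 80
EOF
```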
Read More →
Summary
There is an open source project from Jetstack called kube-lego. It allows you to automatically request SSL certificates for your Kubernetes cluster using the free Let’s Encrypt service. Working with Let’s Encrypt through kube-lego is quite straightforward. The Nginx Ingress Controller has built-in support for kube-lego. Having RBAC might seem like a complication, but in fact it doesn’t add much complexity to the solution.
Requirements
- Kubernetes 1.8.0 or higher with the Nginx Ingress Controller deployed
- 30 minutes of spare time

Description
Let’s Encrypt is a service that provides you with automatic TLS/SSL certificate provisioning for your website.
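Not spelled out in this excerpt, but kube-lego hooks in through an annotation on the Ingress; a sketch with placeholder host and secret names:

```
# Sketch of an Ingress that kube-lego would pick up; host and secret
# names are hypothetical.
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"  # tells kube-lego to request a certificate
spec:
  tls:
  - hosts:
    - example.com
    secretName: myapp-tls           # kube-lego stores the issued certificate here
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: 80
EOF
```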
Read More →
Description
This guide will walk you through installing Kubernetes 1.8 with RBAC enabled on so-called “bare metal”, along with the Calico network plugin. Although there is nothing bare-metal specific about it, that’s what I will use for the installation. Once exposed to the Internet, this cluster can be used for small projects. That’s exactly what I use it for :)
Requirements
- A spare machine with Ubuntu installed and at least a few GB of RAM
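Roughly what a kubeadm-based bootstrap looks like — a sketch, not the guide’s exact steps; the Calico manifest URL is illustrative, so use the release matching your cluster:

```
# Bootstrap the control plane; kubeadm 1.8 enables RBAC by default.
# 192.168.0.0/16 is the pod CIDR Calico's stock manifest expects.
kubeadm init --pod-network-cidr=192.168.0.0/16

# Make kubectl work for your regular user.
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the Calico network plugin (illustrative URL; pick the version
# matching Kubernetes 1.8).
kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
```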
Read More →
The story of Prometheus eating up 20GB of RAM
2017-11-30 UPDATE
Now that those few days have passed, I have not had any further problems with the Kubernetes cluster being unresponsive. This article therefore concludes a few weeks of investigating why the machine could freeze entirely. Set limits on your pods so that they won't kill your node.
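Not from the article itself, but a minimal sketch of what such a limit looks like; the image and values are illustrative:

```
# Hypothetical pod with memory/CPU limits; values are illustrative.
# With a memory limit set, the container gets OOM-killed instead of
# starving the whole node.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: prometheus
spec:
  containers:
  - name: prometheus
    image: prom/prometheus:v1.8.2
    resources:
      requests:
        memory: 1Gi
        cpu: 250m
      limits:
        memory: 2Gi
        cpu: "1"
EOF
```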
The Story
The Weave Cloud DaemonSet deploys Prometheus to the cluster by default. Prometheus scrapes metrics from your cluster and stores them as time-series data (this might not be an accurate description of what Prometheus does, but it's good enough for what happened here).
Read More →
EDIT 24-11-2017: The solution below did not help me fix the freezing cluster. Since then, though, I have identified another problem with the setup: Prometheus was eating up all of the machine's memory. The link to the article:
https://cwienczek.com/lesson-1-always-set-limits-to-containers-running-in-your-cluster/
Summary
If you’ve:
- installed Kubernetes on your bare-metal machine,
- that machine is running Ubuntu, and
- your machine is suddenly unresponsive,

then: make sure you don’t have docker 1.9.1 as per this topic: [kubelet high CPU][https://github.
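A quick way to check which engine version you are running (not from the post):

```
# Prints just the docker daemon version, e.g. 1.9.1
docker version --format '{{.Server.Version}}'
```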
Read More →
At the time of writing this article, a Kubernetes Helm repository supports only basic authentication. As of Helm 2.7.0, though, there is another, perhaps simpler way: using Azure Blob Storage you can easily make your Helm repository private.
Requirements
- Time: ~10 minutes
- Helm Package Manager 2.7.0-rc1 or later
- Microsoft Azure account, at least with permissions to create an Azure storage account
- Azure CLI, tested on 2.0.19 Darwin
- A Helm chart that you can upload to the cloud

Summary
- Create an Azure storage account in one of your resource groups
- Add a blob storage container to the Azure storage account and set access to private
- Go to Storage account -> Shared Access Signature and generate read-only credentials for Helm users
- The URL to your repository will be: https://[azure_storage_name].
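Roughly what those steps look like with the Azure CLI and Helm — a sketch with placeholder account, container, and chart names, assuming you are logged in via az login:

```
# Create a private storage account and blob container (names are placeholders).
az storage account create --name myhelmrepo --resource-group my-rg --sku Standard_LRS
az storage container create --name charts --account-name myhelmrepo --public-access off

# Package the chart and build the repository index.
helm package ./mychart
helm repo index . --url https://myhelmrepo.blob.core.windows.net/charts

# Upload the chart and the index to the container.
az storage blob upload --account-name myhelmrepo --container-name charts \
  --file mychart-0.1.0.tgz --name mychart-0.1.0.tgz
az storage blob upload --account-name myhelmrepo --container-name charts \
  --file index.yaml --name index.yaml

# Consumers add the repository with the read-only SAS token appended.
helm repo add private-charts "https://myhelmrepo.blob.core.windows.net/charts?<sas_token>"
```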
Read More →
git clone https://github.com/mcwienczek/coreos-kubernetes
cd coreos-kubernetes/multi-node/vagrant
vagrant up
export KUBECONFIG="${KUBECONFIG}:$(pwd)/kubeconfig"
kubectl config use-context vagrant-multi
kubectl get nodes
172.17.4.101   NotReady   1m   v1.7.3+coreos.0
172.17.4.201   NotReady   1m   v1.7.3+coreos.0
172.17.4.202   NotReady   1m   v1.7.3+coreos.0
172.17.4.203   NotReady   1m   v1.7.3+coreos.0
172.17.4.101   Ready      7m   v1.7.3+coreos.0
172.17.4.201   Ready      7m   v1.7.3+coreos.0
172.17.4.202   Ready      7m   v1.7.3+coreos.0
172.17.4.203   Ready      7m   v1.7.3+coreos.0
kubectl get pods -n kube-system
calico-node-192jz                2/2   Running   0   8m
calico-node-44zlw                2/2   Running   0   8m
calico-node-6t101                2/2   Running   0   8m
calico-node-gxbcf                2/2   Running   0   8m
calico-policy-controller-07g7r   1/1   Running   0   8m
heapster-v1.
Read More →
I’ve recently had a problem exposing my minikube to the outside world. The use case was testing the cluster from a mobile device. Officially, minikube supports only a “host-only” network interface. Fortunately, there is a way to access a minikube cluster from the outside world; it’s quite simple, but documented nowhere.
There is one downside to this method: I had to use the VirtualBox driver for spinning up the cluster, as I still don’t know whether you can forward ports using xhyve.
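Not necessarily the post’s exact approach, but with the VirtualBox driver a NAT port-forwarding rule is one way to do it; the VM name, rule name, and ports below are illustrative:

```
# Forward host port 8443 to the minikube VM through VirtualBox NAT
# while the VM is running; rule name and ports are placeholders.
VBoxManage controlvm "minikube" natpf1 "apiserver,tcp,,8443,,8443"

# The cluster API is then reachable from other devices via the host:
# https://<host-ip>:8443
```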
Read More →