  • By Harun Şaşmaz
  • Last update: Jun 2, 2022

Cluster Monitoring

A playground to learn how to use Knative on Kubernetes clusters.

Our toy service retrieves information about deployments running in the cluster.

Learning Outcomes

  • Using Kubernetes Go SDK
  • Creating a cluster with Terraform
  • Installing Knative & Istio to cluster
  • Using custom domains to publish a service
  • Providing secure connection with HTTPS

Requirements

  • gcloud: Google Cloud Platform CLI
    • brew install --cask google-cloud-sdk
  • kubectl: Kubernetes CLI
    • brew install kubectl
  • kn: Knative CLI
    • brew install kn
  • Go 1.17
  • Docker
    • brew install docker
  • terraform: Terraform CLI
    • brew install hashicorp/tap/terraform

Service Endpoints

Expose information on all pods in the cluster

An endpoint to the service that exposes all pods running in the cluster in a given namespace:

GET `/services/{namespace}`

    [
        {
            "name": "first",
            "applicationGroup": "alpha",
            "runningPodsCount": 2
        },
        {
            "name": "second",
            "applicationGroup": "beta",
            "runningPodsCount": 1
        }
    ]

Expose information on a group of applications in the cluster

An endpoint in our service that exposes the pods in the cluster in a given namespace that are part of the same applicationGroup:

GET `/services/{namespace}/{applicationGroup}`

    [
        {
            "name": "foobar",
            "applicationGroup": "<applicationGroup>",
            "runningPodsCount": 1
        }
    ]

Creating a Cluster

  1. Apply the Terraform definitions
cd infrastructure/terraform
terraform init
terraform apply
  2. Authorize kubectl
gcloud container clusters get-credentials <CLUSTER_NAME>
  3. Verify kubectl
kubectl cluster-info
  4. Allow Kubernetes to pull images
kubectl create secret docker-registry gcr-access-token \
--docker-server=eu.gcr.io \
--docker-username=oauth2accesstoken \
--docker-password="$(gcloud auth print-access-token)" \
--docker-email=<YOUR_EMAIL>

Setup Knative & Istio

Upgrade cluster version to 1.22+

gcloud container clusters upgrade <CLUSTER_NAME> --master --latest

This may take a few minutes.

Install Knative Serving with YAML

First, check the minimum system requirements for Knative to operate in a Kubernetes cluster.

  1. Apply the following manifest to install the required CRDs:
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.4.0/serving-crds.yaml
  2. Apply the following manifest to install the core components:
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.4.0/serving-core.yaml
  3. Verify the installation
kubectl get pods -n knative-serving

Output should be similar to the following; all pods should be in the Running state:

NAME                                      READY   STATUS    RESTARTS   AGE
3scale-kourier-control-54cc54cc58-mmdgq   1/1     Running   0          81s
activator-67656dcbbb-8mftq                1/1     Running   0          97s
autoscaler-df6856b64-5h4lc                1/1     Running   0          97s
controller-788796f49d-4x6pm               1/1     Running   0          97s
domain-mapping-65f58c79dc-9cw6d           1/1     Running   0          97s
domainmapping-webhook-cc646465c-jnwbz     1/1     Running   0          97s
webhook-859796bc7-8n5g2                   1/1     Running   0          96s

Install Istio with YAML

  1. Install Istio on the cluster
kubectl apply -l knative.dev/crd-install=true -f https://github.com/knative/net-istio/releases/download/knative-v1.4.0/istio.yaml
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.4.0/istio.yaml
  2. Install the Knative Istio controller
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.4.0/net-istio.yaml
  3. Verify the installation
kubectl get pods -n istio-system

Output should be similar to:

NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-666588bf64-7x66v   1/1     Running   0          120m
istio-ingressgateway-666588bf64-cvfnb   1/1     Running   0          120m
istio-ingressgateway-666588bf64-qh4gl   1/1     Running   0          120m
istiod-56967d8fcc-7w2n4                 1/1     Running   0          120m
istiod-56967d8fcc-lfmvk                 1/1     Running   0          120m
istiod-56967d8fcc-v44c7                 1/1     Running   0          120m
  4. Fetch the external IP of your ingress gateway
kubectl --namespace istio-system get service istio-ingressgateway

The external IP in this output will be used for the custom domain and HTTPS configuration below.

Auto TLS and Custom Domain Configuration

Install Cert-Manager

  1. [OPTIONAL] If you are using Google Kubernetes Engine, grant cluster-admin permissions to your GCP account:
kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user=$(gcloud config get-value core/account)
  2. Install all cert-manager components
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
  3. Verify the installation
kubectl get pods --namespace cert-manager

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-5c6866597-zw7kh               1/1     Running   0          2m
cert-manager-cainjector-577f6d9fd7-tr77l   1/1     Running   0          2m
cert-manager-webhook-787858fcdb-nlzsq      1/1     Running   0          2m
  4. Install the Knative cert-manager components
kubectl apply -f https://github.com/knative/net-certmanager/releases/download/knative-v1.4.0/release.yaml

Custom Domain Configuration

For this project, I used my personal domain, harunsasmaz.com, which is registered on Cloudflare.

  1. Open the config-domain ConfigMap for editing
kubectl edit configmap config-domain -n knative-serving
  2. Edit the file to use your domain instead of example.com

First, delete everything under the _example key, then add your domain under the data section as follows. Note that the right-hand side is "" intentionally.

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  mydomain.com: ""
  3. Publish your domain

First, visit your DNS provider's dashboard and create a wildcard A record whose target is your cluster's external IP, fetched in the Istio section above.

*.default.mydomain.com   59     IN     A   <EXTERNAL_IP>


  • * is the wildcard: any URL ending with default.mydomain.com will be resolved to the provided IP.
  • default means your service runs in the default namespace.
  • A is the record type.
  • EXTERNAL_IP is the target address to which your DNS provider will direct incoming requests.

Create a Cluster Issuer

  1. Create a YAML file using the following template
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-http01-issuer
spec:
  acme:
    privateKeySecretRef:
      name: letsencrypt
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: istio

You may change the names of the private key secret and the cluster issuer.

  2. Apply the YAML file
kubectl apply -f <filename>.yaml
  3. Verify the ClusterIssuer
kubectl get clusterissuer <cluster-issuer-name>

You should see the READY state as True:

NAME                        READY   AGE
letsencrypt-http01-issuer   True    1m

Configure Cert-Manager ConfigMap

  1. Open the config-certmanager ConfigMap for editing
kubectl edit configmap config-certmanager --namespace knative-serving
  2. Add a reference to the issuer you created above under the data section

First, replace the _example section with a data section, then add the following:

data:
  issuerRef: |
    kind: ClusterIssuer
    name: letsencrypt-http01-issuer
  3. Verify that you updated the file successfully
kubectl get configmap config-certmanager --namespace knative-serving --output yaml

Turn on Auto-TLS for HTTPS

  1. Open the config-network ConfigMap for editing
kubectl edit configmap config-network --namespace knative-serving
  2. Enable Auto-TLS

First, replace the _example section with a data section, then add the following:

data:
  auto-tls: Enabled
  3. Verify that you updated the file successfully
kubectl get configmap config-network --namespace knative-serving --output yaml

Verify Auto-TLS

  1. Install a dummy Knative service
kubectl apply -f https://raw.githubusercontent.com/knative/docs/main/docs/serving/autoscaling/autoscale-go/service.yaml
  2. Check for HTTPS
kubectl get ksvc autoscale-go

Output should be:

NAME           URL                                            LATESTCREATED        LATESTREADY          READY   REASON
autoscale-go   https://autoscale-go.default.harunsasmaz.com   autoscale-go-00001   autoscale-go-00001   True

Note: it might take a few minutes to provision a TLS certificate; until then you may see an http URL.

  • If you cannot get an HTTPS connection, you can debug with:
kubectl describe certificate <CERTIFICATE_NAME>


Create a new Knative service

make push-image
kubectl apply -f infrastructure/knative/service.yaml

You can check that HTTPS and your custom domain have been configured successfully with the following:

kubectl get ksvc

Deployment of test services

  1. Give read permission to the default service account

The permission granted below (cluster-admin) is broader than necessary and not encouraged, but is applied for this sample project.

kubectl create clusterrolebinding admin-account \
  --clusterrole=cluster-admin \
  --serviceaccount=default:default
  2. Apply definitions
cd infrastructure/kubernetes
kubectl apply -f services.yaml

Testing via cURL

By adding the -v flag, you can watch the TLS handshake establish a secure connection with your cluster.

The python -m json.tool command pretty-prints the JSON responses.

$ curl -vX GET https://api-service.default.harunsasmaz.com/services/default | python -m json.tool 

> {
    "data": [
        {
            "applicationGroup": "beta",
            "name": "blissful-goodall",
            "runningPodsCount": 1
        },
        {
            "applicationGroup": "beta",
            "name": "confident-cartwright",
            "runningPodsCount": 1
        },
        {
            "applicationGroup": "",
            "name": "happy-colden",
            "runningPodsCount": 1
        },
        {
            "applicationGroup": "gamma",
            "name": "quirky-raman",
            "runningPodsCount": 1
        },
        {
            "applicationGroup": "alpha",
            "name": "stoic-sammet",
            "runningPodsCount": 2
        }
    ],
    "success": true
}
$ curl -vX GET https://api-service.default.harunsasmaz.com/services/default/beta | python -m json.tool

> {
    "data": [
        {
            "applicationGroup": "beta",
            "name": "blissful-goodall",
            "runningPodsCount": 1
        },
        {
            "applicationGroup": "beta",
            "name": "confident-cartwright",
            "runningPodsCount": 1
        }
    ],
    "success": true
}