3.1. Hubble

Before we start with the CNI functionality of Cilium and its security components, we want to enable the optional Hubble component (which is disabled by default) so that we can take full advantage of its eBPF observability capabilities.

Task 3.1.1: Install the Hubble CLI

Similar to the cilium CLI, the hubble CLI interfaces with Hubble and allows you to observe network traffic within Kubernetes.

So let us install the hubble CLI.

Linux/Webshell Setup

Execute the following command to download the hubble CLI:

curl -L --remote-name-all https://github.com/cilium/hubble/releases/download/v0.11.1/hubble-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC hubble-linux-amd64.tar.gz /usr/local/bin
rm hubble-linux-amd64.tar.gz{,.sha256sum}
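
The `{,.sha256sum}` suffix in the commands above is shell brace expansion: curl receives two URLs, the archive and its checksum file. The verification step can be sketched locally with a dummy file (`demo.tar.gz` is a stand-in, not the real release archive):

```shell
# Brace expansion turns one pattern into two arguments:
echo hubble-linux-amd64.tar.gz{,.sha256sum}
# prints: hubble-linux-amd64.tar.gz hubble-linux-amd64.tar.gz.sha256sum

# Checksum verification, demonstrated with a dummy file:
echo "demo content" > demo.tar.gz
sha256sum demo.tar.gz > demo.tar.gz.sha256sum
sha256sum --check demo.tar.gz.sha256sum
# prints: demo.tar.gz: OK
rm demo.tar.gz{,.sha256sum}
```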

macOS Setup

Execute the following command to download the hubble CLI:

curl -L --remote-name-all https://github.com/cilium/hubble/releases/download/v0.11.1/hubble-darwin-amd64.tar.gz{,.sha256sum}
shasum -a 256 -c hubble-darwin-amd64.tar.gz.sha256sum
sudo tar xzvfC hubble-darwin-amd64.tar.gz /usr/local/bin
rm hubble-darwin-amd64.tar.gz{,.sha256sum}

Hubble CLI

Now that we have the hubble CLI, let’s have a look at some commands:

hubble version

should show

hubble 0.11.1 compiled with go1.19.5 on linux/amd64

or

hubble help

should show

Hubble is a utility to observe and inspect recent Cilium routed traffic in a cluster.

Usage:
  hubble [command]

Available Commands:
  completion  Output shell completion code
  config      Modify or view hubble config
  help        Help about any command
  list        List Hubble objects
  observe     Observe flows of a Hubble server
  status      Display status of Hubble server
  version     Display detailed version information

Global Flags:
      --config string   Optional config file (default "/home/user/.config/hubble/config.yaml")
  -D, --debug           Enable debug messages

Get help:
  -h, --help    Help for any command or subcommand

Use "hubble [command] --help" for more information about a command.

Task 3.1.2: Deploy a simple application

Before we enable Hubble in Cilium we want to make sure we have at least one application to observe.

Let’s have a look at the following resource definitions:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend-container
        image: docker.io/byrnedo/alpine-curl:0.1.8
        imagePullPolicy: IfNotPresent
        command: [ "/bin/ash", "-c", "sleep 1000000000" ]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: not-frontend
  labels:
    app: not-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: not-frontend
  template:
    metadata:
      labels:
        app: not-frontend
    spec:
      containers:
      - name: not-frontend-container
        image: docker.io/byrnedo/alpine-curl:0.1.8
        imagePullPolicy: IfNotPresent
        command: [ "/bin/ash", "-c", "sleep 1000000000" ]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend-container
        env:
        - name: PORT
          value: "8080"
        ports:
        - containerPort: 8080
        image: docker.io/cilium/json-mock:1.2
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  labels:
    app: backend
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
  - name: http
    port: 8080

The application consists of two client deployments (frontend and not-frontend) and one backend deployment (backend). We are going to send requests from the frontend and not-frontend pods to the backend pod.

Create a file simple-app.yaml with the above content.

Deploy the app:

kubectl apply -f simple-app.yaml

this gives you the following output:

deployment.apps/frontend created
deployment.apps/not-frontend created
deployment.apps/backend created
service/backend created

Verify with the following command that everything is up and running:

kubectl get all,cep,ciliumid
NAME                               READY   STATUS    RESTARTS   AGE
pod/backend-65f7c794cc-b9j66       1/1     Running   0          3m17s
pod/frontend-76fbb99468-mbzcm      1/1     Running   0          3m17s
pod/not-frontend-8f467ccbd-cbks8   1/1     Running   0          3m17s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/backend      ClusterIP   10.97.228.29   <none>        8080/TCP   3m17s
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP    45m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/backend        1/1     1            1           3m17s
deployment.apps/frontend       1/1     1            1           3m17s
deployment.apps/not-frontend   1/1     1            1           3m17s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/backend-65f7c794cc       1         1         1       3m17s
replicaset.apps/frontend-76fbb99468      1         1         1       3m17s
replicaset.apps/not-frontend-8f467ccbd   1         1         1       3m17s

NAME                                                    ENDPOINT ID   IDENTITY ID   INGRESS ENFORCEMENT   EGRESS ENFORCEMENT   VISIBILITY POLICY   ENDPOINT STATE   IPV4         IPV6
ciliumendpoint.cilium.io/backend-65f7c794cc-b9j66       144           67823                                                                        ready            10.1.0.44
ciliumendpoint.cilium.io/frontend-76fbb99468-mbzcm      1898          76556                                                                        ready            10.1.0.161
ciliumendpoint.cilium.io/not-frontend-8f467ccbd-cbks8   208           127021                                                                       ready            10.1.0.128

NAME                              NAMESPACE     AGE
ciliumidentity.cilium.io/127021   default       3m15s
ciliumidentity.cilium.io/67688    kube-system   41m
ciliumidentity.cilium.io/67823    default       3m15s
ciliumidentity.cilium.io/76556    default       3m15s

Let us make life a bit easier by storing the pod names in environment variables so we can reuse them later:

export FRONTEND=$(kubectl get pods -l app=frontend -o jsonpath='{.items[0].metadata.name}')
echo ${FRONTEND}
export NOT_FRONTEND=$(kubectl get pods -l app=not-frontend -o jsonpath='{.items[0].metadata.name}')
echo ${NOT_FRONTEND}
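
The `$(...)` above is command substitution: the variable captures whatever the kubectl command prints. The same pattern, sketched without a cluster (echo stands in for the kubectl call, and the pod name is just an example value):

```shell
# echo stands in for 'kubectl get pods ... -o jsonpath=...',
# which needs a running cluster; the pod name is an example value.
export FRONTEND=$(echo "frontend-76fbb99468-mbzcm")
echo ${FRONTEND}
# prints: frontend-76fbb99468-mbzcm
```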

Task 3.1.3: Enable Hubble in Cilium

When you install Cilium using Helm, Hubble is already enabled: the hubble.enabled value is set to true in the values.yaml of the Cilium Helm chart. But we also want to enable Hubble Relay. With the following Helm command you can enable Hubble together with Hubble Relay:

helm upgrade -i cilium cilium/cilium --version 1.12.10 \
  --namespace kube-system \
  --set ipam.operator.clusterPoolIPv4PodCIDRList={10.1.0.0/16} \
  --set cluster.name=cluster1 \
  --set cluster.id=1 \
  --set operator.replicas=1 \
  --set upgradeCompatibility=1.11 \
  --set kubeProxyReplacement=disabled \
  `# hubble and hubble relay variables:` \
  --set hubble.enabled=true \
  --set hubble.relay.enabled=true \
  --wait

If you have installed Cilium with the cilium CLI, the Hubble component is not enabled by default (nor is Hubble Relay). You can enable Hubble using the following cilium CLI command:

# cilium hubble enable

and then wait until Hubble is enabled:

🔑 Found existing CA in secret cilium-ca
✨ Patching ConfigMap cilium-config to enable Hubble...
♻️  Restarted Cilium pods
⌛ Waiting for Cilium to become ready before deploying other Hubble component(s)...
🔑 Generating certificates for Relay...
✨ Deploying Relay from quay.io/cilium/hubble-relay:v1.12.10...
⌛ Waiting for Hubble to be installed...
✅ Hubble was successfully enabled!

When you have a look at your running pods with kubectl get pod -A you should see a Pod with a name starting with hubble-relay:

kubectl get pod -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
default       backend-6f884b6495-v7bvt           1/1     Running   0             52s
default       frontend-77d99ffc5d-lcsph          1/1     Running   0             52s
default       not-frontend-7db9747986-snjwp      1/1     Running   0             52s
kube-system   cilium-ksr7h                       1/1     Running   0             9m16s
kube-system   cilium-operator-6f5c6f768d-r2qgn   1/1     Running   0             9m17s
kube-system   coredns-6d4b75cb6d-nf8wz           1/1     Running   0             22m
kube-system   etcd-cluster1                      1/1     Running   0             22m
kube-system   hubble-relay-84b4ddb556-nr7c8      1/1     Running   0             10s
kube-system   kube-apiserver-cluster1            1/1     Running   0             22m
kube-system   kube-controller-manager-cluster1   1/1     Running   0             22m
kube-system   kube-proxy-7l6qk                   1/1     Running   0             22m
kube-system   kube-scheduler-cluster1            1/1     Running   0             22m
kube-system   storage-provisioner                1/1     Running   1 (21m ago)   22m

Cilium agents are restarting, and a new Hubble Relay pod is now present. We can wait for Cilium and Hubble to be ready by running:

cilium status --wait

which should give you an output similar to this:

    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         OK
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Deployment        hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium             Running: 1
                  cilium-operator    Running: 1
                  hubble-relay       Running: 1
Cluster Pods:     9/9 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.11.2@sha256:ea677508010800214b0b5497055f38ed3bff57963fa2399bcb1c69cf9476453a: 1
                  cilium-operator    quay.io/cilium/operator-generic:v1.11.2@sha256:b522279577d0d5f1ad7cadaacb7321d1b172d8ae8c8bc816e503c897b420cfe3: 1
                  hubble-relay       quay.io/cilium/hubble-relay:v1.11.2@sha256:306ce38354a0a892b0c175ae7013cf178a46b79f51c52adb5465d87f14df0838: 1

Hubble is now enabled. We can now open a local port-forward to it:

cilium hubble port-forward&

And then check Hubble status via the Hubble CLI (which uses the port-forward we just opened):

hubble status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 947/4095 (23.13%)
Flows/s: 3.84
Connected Nodes: 1/1
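
Current/Max Flows reports how full Hubble's in-memory ring buffer is; the percentage is simply current divided by max:

```shell
# 947 stored flows out of a 4095-entry ring buffer:
awk 'BEGIN { printf "%.2f%%\n", 947 / 4095 * 100 }'
# prints: 23.13%
```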

The Hubble CLI is now primed for observing network traffic within the cluster.

Task 3.1.4: Observing flows with Hubble

We now want to use the hubble CLI to observe some network flows in our Kubernetes cluster. Let us have a look at the following command:

hubble observe

which gives you a list of network flows:

Nov 23 14:49:03.030: 10.0.0.113:46274 <- kube-system/hubble-relay-f6d85866c-csthd:4245 to-stack FORWARDED (TCP Flags: ACK, PSH)
Nov 23 14:49:03.030: 10.0.0.113:46274 -> kube-system/hubble-relay-f6d85866c-csthd:4245 to-endpoint FORWARDED (TCP Flags: RST)
Nov 23 14:49:04.011: 10.0.0.113:44840 <- 10.0.0.114:4240 to-stack FORWARDED (TCP Flags: ACK)
Nov 23 14:49:04.011: 10.0.0.113:44840 -> 10.0.0.114:4240 to-endpoint FORWARDED (TCP Flags: ACK)
Nov 23 14:49:04.226: 10.0.0.113:32898 -> kube-system/coredns-558bd4d5db-xzvc9:8080 to-endpoint FORWARDED (TCP Flags: SYN)
Nov 23 14:49:04.226: 10.0.0.113:32898 <- kube-system/coredns-558bd4d5db-xzvc9:8080 to-stack FORWARDED (TCP Flags: SYN, ACK)
Nov 23 14:49:04.227: 10.0.0.113:32898 -> kube-system/coredns-558bd4d5db-xzvc9:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Nov 23 14:49:04.227: 10.0.0.113:32898 -> kube-system/coredns-558bd4d5db-xzvc9:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Nov 23 14:49:04.227: 10.0.0.113:32898 <- kube-system/coredns-558bd4d5db-xzvc9:8080 to-stack FORWARDED (TCP Flags: ACK, PSH)
Nov 23 14:49:04.227: 10.0.0.113:32898 -> kube-system/coredns-558bd4d5db-xzvc9:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Nov 23 14:49:04.227: 10.0.0.113:32898 <- kube-system/coredns-558bd4d5db-xzvc9:8080 to-stack FORWARDED (TCP Flags: ACK, FIN)
Nov 23 14:49:04.227: 10.0.0.113:32898 -> kube-system/coredns-558bd4d5db-xzvc9:8080 to-endpoint FORWARDED (TCP Flags: ACK)
Nov 23 14:49:04.842: 10.0.0.113:34716 -> kube-system/coredns-558bd4d5db-xzvc9:8181 to-endpoint FORWARDED (TCP Flags: SYN)
Nov 23 14:49:04.842: 10.0.0.113:34716 <- kube-system/coredns-558bd4d5db-xzvc9:8181 to-stack FORWARDED (TCP Flags: SYN, ACK)
Nov 23 14:49:04.842: 10.0.0.113:34716 -> kube-system/coredns-558bd4d5db-xzvc9:8181 to-endpoint FORWARDED (TCP Flags: ACK)
Nov 23 14:49:04.842: 10.0.0.113:34716 -> kube-system/coredns-558bd4d5db-xzvc9:8181 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Nov 23 14:49:04.842: 10.0.0.113:34716 <- kube-system/coredns-558bd4d5db-xzvc9:8181 to-stack FORWARDED (TCP Flags: ACK, PSH)
Nov 23 14:49:04.843: 10.0.0.113:34716 <- kube-system/coredns-558bd4d5db-xzvc9:8181 to-stack FORWARDED (TCP Flags: ACK, FIN)
Nov 23 14:49:04.843: 10.0.0.113:34716 -> kube-system/coredns-558bd4d5db-xzvc9:8181 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Nov 23 14:49:05.971: kube-system/hubble-relay-f6d85866c-csthd:40844 -> 192.168.49.2:4244 to-stack FORWARDED (TCP Flags: ACK, PSH)

with

hubble observe -f

you can observe and follow the currently active flows in your Kubernetes cluster. Stop the command with CTRL+C.

Let us produce some traffic:

for i in {1..10}; do
  kubectl exec -ti ${FRONTEND} -- curl -I --connect-timeout 5 backend:8080
  kubectl exec -ti ${NOT_FRONTEND} -- curl -I --connect-timeout 5 backend:8080
done
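
Each pass through the loop sends two HTTP HEAD requests (`curl -I`), one from each client pod, so the loop generates 20 requests in total. A cluster-free sketch of the loop structure, with echo standing in for the kubectl exec calls:

```shell
# Dry-run sketch of the traffic loop above; echo stands in for the
# kubectl exec calls, which need a running cluster.
requests=0
for i in {1..10}; do
  echo "request ${i} from frontend";     requests=$((requests + 1))
  echo "request ${i} from not-frontend"; requests=$((requests + 1))
done
echo "total: ${requests} requests"
# last line printed: total: 20 requests
```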

We can now use the hubble CLI to filter traffic we are interested in. Here are some examples to specifically retrieve the network activity between our frontends and backend:

hubble observe --to-pod backend
hubble observe --namespace default --protocol tcp --port 8080

Note that Hubble tells us the verdict of each flow, here FORWARDED, but it could also be DROPPED. If you only want to see DROPPED traffic, you can execute:

hubble observe --verdict DROPPED

For now this should only show some packets that have been sent to an already deleted pod. After we have configured NetworkPolicies, we will see other dropped packets.
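
The `--verdict DROPPED` flag is essentially a filter over the flow stream. Its effect can be simulated locally by grepping sample flow lines (these lines are illustrative, not captured Hubble output):

```shell
# Two made-up flow lines, one forwarded and one dropped:
cat <<'EOF' > flows.txt
10.1.0.161:40000 -> default/backend:8080 to-endpoint FORWARDED (TCP Flags: SYN)
10.1.0.128:40001 -> default/backend:8080 Policy denied DROPPED (TCP Flags: SYN)
EOF
# Keep only dropped flows, as 'hubble observe --verdict DROPPED' would:
grep DROPPED flows.txt
rm flows.txt
```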