Obtain raw Kubernetes metrics with kubectl

Recently I was trying to find the best metrics reporting and graphing solution for a hybrid Linux/Windows Kubernetes cluster. Unfortunately, Windows support in many great open source tools still leaves something to be desired.

I'm still playing around with different options and will eventually settle on something (which will probably become its own blog post), but I wanted to share something I found while digging around that's pretty neat in its own right.

Most Kubernetes deployments have the "metrics-server" pod running. You can check for yourself with this command:

kubectl get pods --all-namespaces | grep metrics-server
kube-system          metrics-server-67b4964794-l7z5q                         1/1       Running   0          24d
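
If you want a second check, you can also confirm that the aggregated metrics API itself is registered with the API server; it should show up as metrics.k8s.io/v1beta1 when metrics-server is healthy:

# Confirm the metrics.k8s.io API group is available
kubectl api-versions | grep metrics.k8s.io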

If you see this pod running (it will have a different identifier at the end), then you'll be able to query kubectl to get the raw metrics for either your nodes or your pods:

# Get the metrics for all nodes
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes

# Get the metrics for all pods
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods

If you run this, you'll get raw, unformatted JSON back, which looks terrible in a terminal. We can use a pretty cool utility, jq, to pretty-print it. Run the same commands, but pipe the output through jq:

# Get the metrics for all nodes, formatted through jq
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq '.'

# Get the metrics for all pods, formatted through jq
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods | jq '.'

This will give you nicer output, such as this for the node metrics:

{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "k8s-master-13487264-1",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/k8s-master-13487264-1",
        "creationTimestamp": "2018-12-31T20:15:19Z"
      },
      "timestamp": "2018-12-31T20:15:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "265m",
        "memory": "2684280Ki"
      }
    },
    {
      "metadata": {
        "name": "k8s-master-13487264-2",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/k8s-master-13487264-2",
        "creationTimestamp": "2018-12-31T20:15:19Z"
      },
      "timestamp": "2018-12-31T20:15:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "189m",
        "memory": "2663640Ki"
      }
    }
  ]
}

Or this for the pod metrics:

{
  "kind": "PodMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/pods"
  },
  "items": [
    {
      "metadata": {
        "name": "webportal-deployment-79785448db-dnvcq",
        "namespace": "mhwi285-production",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/mhwi285-production/pods/webportal-deployment-79785448db-dnvcq",
        "creationTimestamp": "2018-12-31T20:16:38Z"
      },
      "timestamp": "2018-12-31T20:16:00Z",
      "window": "1m0s",
      "containers": [
        {
          "name": "webportal-application",
          "usage": {
            "cpu": "0",
            "memory": "270908Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "heapster-f4fbb999d-b8k6c",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/heapster-f4fbb999d-b8k6c",
        "creationTimestamp": "2018-12-31T20:16:38Z"
      },
      "timestamp": "2018-12-31T20:16:00Z",
      "window": "1m0s",
      "containers": [
        {
          "name": "heapster",
          "usage": {
            "cpu": "0",
            "memory": "27724Ki"
          }
        },
        {
          "name": "heapster-nanny",
          "usage": {
            "cpu": "0",
            "memory": "10908Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "kube-apiserver-k8s-master-13487264-2",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-apiserver-k8s-master-13487264-2",
        "creationTimestamp": "2018-12-31T20:16:38Z"
      },
      "timestamp": "2018-12-31T20:16:00Z",
      "window": "1m0s",
      "containers": [
        {
          "name": "kube-apiserver",
          "usage": {
            "cpu": "20m",
            "memory": "679700Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "webportal-deployment-79785448db-6vtbt",
        "namespace": "mhwi285-production",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/mhwi285-production/pods/webportal-deployment-79785448db-6vtbt",
        "creationTimestamp": "2018-12-31T20:16:38Z"
      },
      "timestamp": "2018-12-31T20:16:00Z",
      "window": "1m0s",
      "containers": [
        {
          "name": "webportal-application",
          "usage": {
            "cpu": "0",
            "memory": "395576Ki"
          }
        }
      ]
    }
  ]
}

You get some basic metric data back. For the nodes, you get the node name, the timestamp and window over which the metrics were gathered, and the node's CPU and memory usage. For the pods, you get the pod name and namespace, the timestamp and window, and the name, CPU usage, and memory usage of each container inside the pod.
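
If you only care about one node or one namespace, you can scope the query the same way; the paths below mirror the selfLink fields in the output above:

# Metrics for a single node
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/k8s-master-13487264-1 | jq '.'

# Metrics for the pods in a single namespace
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods | jq '.'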

It's important to note that these values aren't persisted anywhere; metrics-server only keeps the most recent sample. The output you get from this command just shows what the data was the last time it was collected. If you want to see historical trends, you'll need to store the output yourself or use some other open source tool to do it for you.
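
If you do want to roll your own quick capture, a minimal sketch is just a shell loop that appends a timestamped snapshot to a file (the file name and 60-second interval here are arbitrary placeholders):

# Append a timestamped node-metrics snapshot to a file every 60 seconds
# (node-metrics.jsonl and the interval are placeholders; adjust to taste)
while true; do
  kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes \
    | jq -c '{capturedAt: (now | todate), nodes: .items}' \
    >> node-metrics.jsonl
  sleep 60
done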

Lastly, just to show off how cool the jq tool is, here are some samples of how you can pull specific bits of data out of the JSON and format them the way you want:

Here we get just the name, CPU, and memory usage of each node:

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes \
| jq '[.items[] | {nodeName: .metadata.name, nodeCpu: .usage.cpu, nodeMemory: .usage.memory}]'
[
  {
    "nodeName": "k8s-master-13487264-1",
    "nodeCpu": "210m",
    "nodeMemory": "2491580Ki"
  },
  {
    "nodeName": "k8s-master-13487264-2",
    "nodeCpu": "157m",
    "nodeMemory": "2465016Ki"
  },
  {
    "nodeName": "k8s-master-13487264-0",
    "nodeCpu": "137m",
    "nodeMemory": "3352384Ki"
  },
  {
    "nodeName": "1348k8s002",
    "nodeCpu": "372m",
    "nodeMemory": "1363444Ki"
  },
  {
    "nodeName": "1348k8s001",
    "nodeCpu": "242m",
    "nodeMemory": "1156788Ki"
  },
  {
    "nodeName": "1348k8s000",
    "nodeCpu": "615m",
    "nodeMemory": "1512472Ki"
  }
]

And here we get the name and namespace of each pod, along with the name, CPU usage, and memory usage of each container in the pod:

kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods \
| jq '[.items[] | {podName: .metadata.name, podNamespace: .metadata.namespace, containers: [.containers[] | {name: .name, cpu: .usage.cpu, memory: .usage.memory}]}]'

[
  {
    "podName": "kube-addon-manager-k8s-master-13487264-0",
    "podNamespace": "kube-system",
    "containers": [
      {
        "name": "kube-addon-manager",
        "cpu": "1m",
        "memory": "262564Ki"
      }
    ]
  },
  {
    "podName": "kubernetes-metrics-reader-deployment-78954dbf7b-llt7b",
    "podNamespace": "kube-system",
    "containers": [
      {
        "name": "kubernetes-metrics-reader",
        "cpu": "0",
        "memory": "52132Ki"
      }
    ]
  },
  {
    "podName": "kube-addon-manager-k8s-master-13487264-1",
    "podNamespace": "kube-system",
    "containers": [
      {
        "name": "kube-addon-manager",
        "cpu": "16m",
        "memory": "454192Ki"
      }
    ]
  }
]
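
And one more jq trick: if you'd rather drop these numbers into a spreadsheet, the same data can be emitted as CSV using jq's @csv filter (a small sketch, one row per node):

# One CSV row per node: name, CPU, memory
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes \
| jq -r '.items[] | [.metadata.name, .usage.cpu, .usage.memory] | @csv'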

Summary:

While you'll probably want to use some well-known open source tools to actually track your metrics over time (Prometheus and Grafana are two pretty good ones), this is a quick and dirty way to get at some core metrics and see how your pods and nodes are performing.

Justin Carlson
