
Tailing Kubernetes JSON Logs? jq can help

Matt Kornfield
2 min read · Nov 26, 2024

A bit of JSON parsing goes a long way

Photo by Robert Larsson on Unsplash

The Problem

Do you have logs that look like this? Maybe you’re just tailing logs from a Kubernetes pod, such as kubectl logs -f my-pod-12345 and you see something like the following:

{"timestamp": "21031", "level":"warn", "message": "something go brr", "user_context":"1234"}
{"timestamp": "21031", "level": "info", "message": "something else go brr", "user_context": "1234"}

Or maybe your logs have so many fields that they wrap, and you can barely tell what’s happening.

What can we do to handle these annoying log streams? Why didn’t we just use logfmt?!

Fear not! I have a simple solution: jq

The Solution

This is a relatively involved jq filter, since it handles the case where a line isn’t valid JSON, but behold!

kubectl logs -f my-pod-123 |
jq -Rr '. as $line | try (fromjson | [.timestamp, .level, .message] | @tsv ) catch $line'
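To see it in action on the sample lines from earlier (assuming jq is installed locally), you can pipe text straight in instead of tailing a pod:

```shell
# Feed one JSON log line and one plain-text line through the filter.
printf '%s\n' \
  '{"timestamp": "21031", "level": "warn", "message": "something go brr", "user_context": "1234"}' \
  'plain text that is not JSON' |
jq -Rr '. as $line | try (fromjson | [.timestamp, .level, .message] | @tsv) catch $line'
# The JSON line comes out as tab-separated timestamp/level/message;
# the non-JSON line passes through untouched thanks to the catch.
```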

To break this down specifically:

'. as $line | try (fromjson | [.timestamp, .level, .message] | @tsv ) catch $line'

  1. The . as $line part binds the current input to a variable, $line — the raw text of each line being read from standard input (that’s what the -R flag gives us).
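If jq isn’t available where you’re tailing logs, the same try/catch fallback can be sketched in Python — a rough equivalent for illustration, not the article’s own code:

```python
import json
import sys


def format_line(line: str) -> str:
    """Mimic the jq filter: emit timestamp/level/message as TSV,
    or fall back to the raw line when it isn't a JSON object."""
    try:
        record = json.loads(line)
        # Missing keys render as empty fields, like jq's null -> "" in @tsv.
        return "\t".join(str(record.get(k, "")) for k in ("timestamp", "level", "message"))
    except (json.JSONDecodeError, AttributeError):
        # Not JSON (or not an object): pass the line through untouched.
        return line


if __name__ == "__main__":
    for raw in sys.stdin:
        print(format_line(raw.rstrip("\n")))
```

Pipe kubectl logs -f into this script the same way you would into jq.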
