
Logback superpowers for better observability

Logs are an essential tool for monitoring and observing system behavior in test and especially production. They provide a wealth of detailed information and are particularly useful for troubleshooting complex issues. When other tools fall short, logs are where you turn to uncover the root cause of problems.

Single log statements are important, but the true power of logs lies in being able to connect the information across statements: across dimensions like requests, users, instances or countries.

Let me show you how to configure Logback to get even more data and information to analyze.


This configuration is for Logback, but the same should be possible in most logging tools. If your tool cannot do this, consider switching to something that can. 🙂

Logs are not designed to store information forever. If this information is vital for your business, consider storing it in a database or as long-term metrics.

1. Log to JSON

It all starts with JSON. Structured data is much easier to analyze and search than regular log lines. You can use the following tips without JSON formatting, but then you need to add each and every value (MDC/Marker) to the output format yourself. 🙂

Most logging tools (Splunk, Grafana Loki, etc.) now have efficient ways to write queries against JSON data. So start logging to JSON and query better than with plain string matching.

Your favorite development framework (Spring etc.) might have a feature for enabling this, but if not, here are the basics for configuring Logback without a framework:

implementation("net.logstash.logback:logstash-logback-encoder:7.3")
build.gradle.kts
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="JSON" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeMdc>true</includeMdc>
        </encoder>
    </appender>


    <root level="INFO">
        <appender-ref ref="JSON"/>
    </root>
</configuration>
logback.xml

After this you should get output like this in your logs:

{
  "@timestamp": "2023-06-17T13:41:01.134+01:00",
  "@version": "1",
  "message": "Hello World",
  "logger_name": "no.f12.application",
  "userId": "user-id-something",
  "documentId": "document-id",
  "documentType": "legal",
  "thread_name": "somethread",
  "level": "INFO",
  "level_value": 20000
}
JSON

Notice the userId, documentId and documentType attributes that have been added. Those are MDC and Marker values. Read on to see how to add them. 🙂

If you miss the old format, you can use different output in tests, and even write the old format to a file. Just remember to log JSON in your deployed environments. 🙂
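For example, Logback picks up a logback-test.xml from the test classpath before it looks for logback.xml, so you can keep a human-friendly pattern there. A minimal sketch (the pattern itself is just an example):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- Plain text output for tests and local runs -->
    <appender name="PLAIN" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="PLAIN"/>
    </root>
</configuration>
logback-test.xml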

2. Use the Mapped Diagnostic Context to add contextual information

The Mapped Diagnostic Context (MDC) is a powerful tool in logging, allowing you to add contextual information to log statements. When you put something in the MDC, it will be automatically added to all log statements in the same thread or request. This feature can greatly enhance the clarity and usefulness of your log output, providing additional context for troubleshooting and analysis.

Most servers and frameworks should have automatic support for managing the MDC. Some can also add extra information, like this Ktor example or something similar in Spring Boot. If you cannot find default support in your tool, all you have to do (in a filter or just in the controller) is:

MDC.put("userId", headers.getUserIdFromJwt())
Kotlin

Voila! It will be included with the other log statements related to that request. You can then write a query in your preferred log tool to display all log entries for the past hour for a particular user. Just remember to review the documentation regarding clean-up if you are not using the support in your framework or server.
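If you are managing the MDC yourself, the important part is cleaning up when the request is done, since threads are usually pooled and reused. Here is a minimal sketch of a hypothetical helper (the withMdc name and the getUserIdFromJwt call are illustrations, not part of any library):

import org.slf4j.MDC

// Puts the given values into the MDC for the duration of the block and always
// removes them afterwards, so they do not leak into the next request handled
// by the same (pooled) thread.
fun <T> withMdc(values: Map<String, String>, block: () -> T): T {
    values.forEach { (key, value) -> MDC.put(key, value) }
    return try {
        block()
    } finally {
        values.keys.forEach { MDC.remove(it) }
    }
}

// Usage in a filter or at the top of a controller:
// withMdc(mapOf("userId" to headers.getUserIdFromJwt())) { handleRequest() }
Kotlin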

I use MDC for a few different things in different contexts, but you can add anything that helps you diagnose across different requests:

  • The request id (and trace id)
  • The user id
  • The company id
  • The user country and location
  • The datacenter

Note that a request ID is closely connected to tracing identifiers like the W3C Trace Context and OpenTracing identifiers. I usually suggest having a request ID as one field and also including something similar to this:

MDC.put("traceId", headers.getTraceId())
Kotlin

If you look at the JSON configuration above, you can see that I have added the includeMdc parameter to the encoder. That way any MDC values are automatically added to the log statements.

3. Use Logback Markers to add structured data to a log-statement

Just like MDC, Markers are Logback's way of adding additional data to a log statement. But a Marker only applies to that specific statement, where you usually have more specific data available than at the places in your code where you set MDC values.

It works best with JSON (again automatically added as noted before), and it opens up possibilities for querying and analysis.

To add additional data to a log statement:

import net.logstash.logback.marker.Markers

// Markers.appendEntries (from logstash-logback-encoder) turns each entry in the
// Kotlin map into a field on this specific log statement
val markers = Markers.appendEntries(mapOf(
  "documentId" to document.id,
  "documentType" to document.type
))
logger.info(markers, "Document retrieved")
Kotlin

This requires a dependency like this:

implementation("net.logstash.logback:logstash-logback-encoder:7.4")
build.gradle.kts

And you will get it added to the JSON structure for querying. Here’s the output example again:

{
  "@timestamp": "2023-06-17T13:41:01.134+01:00",
  "@version": "1",
  "message": "Hello World",
  "logger_name": "no.f12.application",
  "userId": "user-id-something",
  "documentId": "document-id",
  "documentType": "legal",
  "thread_name": "somethread",
  "level": "INFO",
  "level_value": 20000
}
JSON

To Marker or not?

It may be a bit unclear what sets Markers and MDC apart, as they both serve similar purposes. Don’t worry, though; you’ll understand which one to use as you gain more experience with them.

Generally, I recommend starting with MDC. You can use it in situations such as “we’re currently updating the document with this id” or “this payment is being processed now.”

On the other hand, Markers are suitable when you only require the details provided by a single log statement.
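To make the contrast concrete, here is a small sketch (the document variable and the field names are just illustrations):

import net.logstash.logback.marker.Markers
import org.slf4j.MDC

// MDC: everything that happens while we process this document should carry the id
MDC.put("documentId", document.id)

// Marker: only this one statement needs the extra field
logger.info(
    Markers.appendEntries(mapOf("documentType" to document.type)),
    "Document validated"
)
Kotlin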

They are both powerful tools. Good luck. 🙂


What are your favorite tips for logging? Let me know in the comments below.

Commodore ‘Tractor’ printer 4022P (Digital Computers) by Commodore Business Machines is licensed under CC-BY-NC-SA 4.0

Write awesome CLIs!

It’s time to start writing kick-ass CLIs instead of hacking scripts! 🙂 It’s a lot easier than you might think.

If you’re impatient, just scroll to the bottom for a link to the code on GitHub. 🙂

All those scripts

I see a lot of scripts around, but they usually suffer from many of these problems:

  • Missing or bad error handling
  • Limited input validation
  • Clumsy parameter handling
  • No testing, so every change requires testing all input combinations. Not to mention different state on the hard drive.
  • Copy and paste code. It’s hard to re-use libraries in scripts, even though plenty exist.
  • Implicit dependencies on the OS and OS packages

I’ve done way too much of this in my time, and I have felt the pain of maintaining 16k lines of Bash code (I know, stupid). So I started looking for something better…

What I wanted

Coming from the developer side of things, I’m really used to making third-party libraries do a lot of the heavy lifting for me. It felt really awkward that there was no proper way to do this when creating tools for the command line. So I set out to look for:

  • A good way to define the Command Line Interface
  • Proper error handling
  • Test frameworks to enable automated testing
  • A way to package everything together with dependencies

In addition, I really wanted to do some automated testing. I hate writing code without knowing instantly that it performs as I believe it does. You might be differently inclined. 😉

Solutions?

There are many ways of doing this, but the only ones I’ve been able to get some real experience with are Python and Java. I would really like to learn Go, but it’s usually not politically viable and would take some time to learn.

I did maintain and develop a CLI in Python for a good while. And I really like the Python language and all the awesome third-party libraries available. But I always found it lacking in the distribution part. We were under certain (networking) constraints, so downloading stuff from PyPI was NOT an option. It took quite a lot of hacking with Virtualenv and Pip to set up some kind of infrastructure that enabled us to distribute our CLI with its dependencies. YMMV. 🙂

But all these hoops we were jumping through with Python made me think about what is great about Java. The classpath. 😉 Yeah, I know, I know. You all hate the classpath. But that’s because it’s been abused by the Java EE vendors through all these years. It’s really quite awesome, just make sure you take full control of it.

Java, with some help from friends (see the details further down), would let me package it all up and create a truly cross-platform single binary with all dependencies included (JRE required)! It even starts fast! (Unless you overload it with all kinds of Spring+Hibernate stuff. That’s on you.) And even though it sounds like something a masochist would do, it is actually kick ass. Try it. 🙂

If you don’t do this in Java, use Docopt (available in many languages). You should write CLIs and keep your build tool simple (dependencies, versioning, packaging). I’ve seen way too much tooling shoehorned into different build tools. Write CLIs for the stuff not related to building, and use the right tool for the right task.

Test a Java CLI

If you just want to see how fast and easy it actually is (you’ll need a JRE on your path):

$ curl https://dl.dropboxusercontent.com/u/122923/executable-json-util-1.0-SNAPSHOT.jar > ~/bin/json-util && chmod u+x ~/bin/json-util
$ json-util
Usage:
  json-util animate me
  json-util say [--encrypt=] 

In case you did not catch that:

  • We downloaded a jar, saved it as a regular binary in ~/bin, and made it executable.
  • We executed it without any parameters, so it printed the help text.

It’s a really simple (and stupid) example, but to invoke some “real” functionality you can do:

$ json-util say "Hello blog!"
Hello blog!

$ json-util say --encrypt=rot13 "Hello blog!"
Uryyb oybt!

Neat! Write the utils you need to be effective in a language you know, with the tooling you know (this util is created with Maven). And write some F-ing tests while you’re at it. 😉

Tell me more, tell me more…

The things that make writing CLIs in Java fun, easy and robust are:

  • Java. 🙂 Alright, alright. Maybe not the best language for this stuff. But the new IO APIs and the Streams with Lambdas in Java 8 help a lot. And it’s typed… if you’re into that kind of stuff. 🙂 You can of course do this in Groovy or anything else that runs on the JVM, but be aware that many of those languages take some time to bootstrap, and you’ll notice that every time you run the CLI.
  • maven-shade-plugin. It packages your code together with all its dependencies into one binary.
  • The really-executable-jar Maven plugin. It prepends a small header to the jar (the result is still a valid zip) so you can skip the “java -jar …” part every time you execute it.
  • Docopt-java. It makes writing, validating and parsing command line arguments extremely easy and fun (see the sketch after this list).
  • Docopt-completion. Once you have your kick-ass CLI, add some kick-ass tab completion. 😉
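To give a feel for how little code this takes, here is a minimal sketch in Kotlin (also a JVM language). The usage text, the rot13 helper and the main function are illustrations only, and I am assuming docopt-java’s org.docopt.Docopt API (construct it with the usage text, call parse to get a map of the parsed values):

import org.docopt.Docopt

// Hypothetical usage text; docopt derives the argument parser from it.
private val usage = """
Usage:
  json-util say [--encrypt=<algorithm>] <message>
  json-util animate me
""".trimIndent()

fun main(args: Array<String>) {
    // parse() validates the arguments against the usage text and prints the
    // usage text if they do not match.
    val opts = Docopt(usage).parse(args.toList())

    if (opts["say"] == true) {
        val message = opts["<message>"] as String
        println(if (opts["--encrypt"] == "rot13") rot13(message) else message)
    }
}

// Tiny rot13 implementation just to make the sketch self-contained.
fun rot13(input: String): String = input.map { c ->
    when (c) {
        in 'a'..'z' -> 'a' + (c - 'a' + 13) % 26
        in 'A'..'Z' -> 'A' + (c - 'A' + 13) % 26
        else -> c
    }
}.joinToString("")
Kotlin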

Show me code!

You can see an example of all of this (Java and Maven required) at: https://github.com/anderssv/executable-json-util .


Microservices reading list – part II

A little while ago I summarized some great articles and experiences with microservices. As the concept continues its travel along the hype cycle, new excellent (and balanced) stuff is coming out, so I thought I would add some of it here.

I think these articles go more in depth about the trade-offs and reasons behind choosing to build a microservices architecture. So read this post before you go back and review the ones in the first part. 🙂

These are the guys that had videos in my first post. Go there to see them. ๐Ÿ™‚