Replacing legacy systems: Proxy to learn and control

TL;DR – Create a custom proxy to make sure you know how the old system behaves and can transition off it in a controlled manner.

A couple of weeks back I gave a talk at JavaZone about replacing legacy systems (Norwegian only, sorry). One of the techniques I talked about that seemed to spark interest was creating a custom proxy to take control and figure out what the old system does.

Not a dedicated proxy like Nginx or Apache, but a custom proxy written in the language and technology you are comfortable with. It can ease the replacement and transition considerably.

For us that was KTor Server+Client.


When you have a system with a web app, mobile app and API like this:

You can take control of the system by introducing a proxy like this:

Simple as that. 😉 HTTP request in, HTTP request out, receive the response and send it back to the source. A bit of fiddling, but you essentially copy everything forward and back: headers, method, response codes and of course the body.
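As a minimal sketch of the idea: the example below uses the JDK's built-in HTTP server and client instead of KTor so it runs with no dependencies, but the shape is the same. The ports and the legacy address are placeholders.

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Start a pass-through proxy: every request on listenPort is forwarded to
// legacyBase (a placeholder for wherever the legacy system lives), and the
// response is copied back to the caller. Header copying is omitted for brevity.
fun startProxy(listenPort: Int, legacyBase: String): HttpServer {
    val client = HttpClient.newHttpClient()
    val proxy = HttpServer.create(InetSocketAddress(listenPort), 0)
    proxy.createContext("/") { exchange ->
        // Copy method, URI and body forward to the legacy system
        val forward = HttpRequest.newBuilder(URI(legacyBase + exchange.requestURI))
            .method(
                exchange.requestMethod,
                HttpRequest.BodyPublishers.ofByteArray(exchange.requestBody.readBytes())
            )
            .build()
        val response = client.send(forward, HttpResponse.BodyHandlers.ofByteArray())
        // This is the natural place to log, compare or reroute before answering
        exchange.sendResponseHeaders(response.statusCode(), response.body().size.toLong())
        exchange.responseBody.use { it.write(response.body()) }
    }
    proxy.start()
    return proxy
}
```

Once this sits in front of the legacy system, everything you add (logging, comparison, routing) lives in one handler.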

The figure above also shows how we used the proxy to double check some data results in the DB before switching out the legacy system (the dotted lines).


By creating a custom proxy you can monitor, log, analyze and even compare results. All in your preferred development language.

When you gain insights into how your system is working, you can control which system handles and responds to each call.

Want to switch over based on user-id? That’s a simple conditional statement in your proxy code.

Switch gradually based on endpoints? Even easier, just change which system is the recipient for your call.
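To illustrate how simple that decision can be, here is a sketch of a per-request routing rule. The endpoint list and pilot users are made up for the example, not from any real system:

```kotlin
// Hypothetical migration rules -- illustrative names only
val migratedEndpoints = setOf("/orders", "/customers")
val pilotUsers = setOf("user-42", "user-99")

// The proxy calls this per request to pick the recipient system:
// migrated endpoints always go to the new system, plus a pilot group of users.
fun routeToNewSystem(path: String, userId: String): Boolean =
    path in migratedEndpoints || userId in pilotUsers
```

Growing the rollout is then just editing a set, and rolling back is equally cheap.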



We used KTor Server+Client because it made it easier to know that coroutines were handled correctly, but you could also use OkHttp as the client if you wanted.

Be careful with big payloads though: stream the results by default, and only load the body into memory if you know the size and actually need to analyze or modify it (preferably don’t).
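Streaming the body is just piping one stream into another in fixed-size chunks. A minimal sketch (the helper name and buffer size are mine):

```kotlin
import java.io.InputStream
import java.io.OutputStream

// Pipe the upstream body straight to the caller in fixed-size chunks, so a
// large payload never has to fit in memory. Returns the number of bytes copied.
fun streamBody(from: InputStream, to: OutputStream): Long =
    from.copyTo(to, bufferSize = 64 * 1024)
```

Only fall back to buffering the whole body when you genuinely need to inspect it.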

I don’t have the proxy as an example myself, but this doesn’t look like a bad starting point:


Yeah, that’s harder. But it is still just streams of bytes back and forth. Even Cobol Copybooks have a format that can be parsed. 🙂

Happy migrating 😉

What are your best techniques for migrating off legacy systems?


Full stack development with KTor and HTMX ❤️

Sometimes it feels like I have been yak shaving for the last 20 years. Well, not entirely. But things like Rules Engines, SOA, App Servers and debugging Hibernate come to mind. 😉

On the back end, things have gotten better over time. But on the front end, it feels a bit like history repeating. While it is impressive what you can do in React, React and all the accompanying tools are too complex for many normal web applications.

So when I discovered HTMX it really tickled my interest to see how I could combine this into an efficient development flow. Back to basics. Hypermedia and HTML. 🙂

And I think I have a pretty decent starting point in the Github repo linked below. 🙂

A full stack environment with Kotlin, KTor and HTMX. Everything loaded, compiled and packaged with Gradle. Compile in IDEA to see the changes full stack.

The following repo contains a running example (just do ./gradlew run to test it) and a semi-detailed Readme:

You can try the “application” here (initial load might be a little slow, as it runs on a free tier that suspends inactive processes):

This holds real promise, and I hope I get to work with this for real in production soon. 🙂

Thoughts? 🙂



Logback superpowers for better observability

Logs are an essential tool for monitoring and observing system behavior in test and especially production. They provide a wealth of detailed information and are particularly useful for troubleshooting complex issues. When other tools fall short, logs are where you turn to uncover the root cause of problems.

Single log statements are important, but the true power of logs lies in being able to comprehend the information across statements. Connect them across various dimensions like requests, users, instances or countries.

Let me show you how to configure Logback to get even more data and information to analyze.


This configuration is for Logback, but it should be possible in most logging tools. If your tool cannot do this, consider switching to something that can. 🙂

Logs are not designed to store information forever. If this information is vital for your business, consider storing it in a database or long term metrics.

1. Log to JSON

It all starts with JSON. Structured data is much easier to analyze and search than regular log lines. You can use the following tips without JSON formatting, but then you need to add each and every one (MDC/Marker) into the output format. 🙂

Most logging tools, like Splunk and Grafana Loki, now have efficient ways of writing queries against JSON data. So start logging to JSON and query better (than just string matching).

Your favorite framework for development (Spring etc) might have a feature for enabling this, but if not, this is the basics for configuring Logback without a framework:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="JSON" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeMdc>true</includeMdc>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="JSON"/>
    </root>
</configuration>

After this you should get output like this in your logs:

{
  "@timestamp": "2023-06-17T13:41:01.134+01:00",
  "@version": "1",
  "message": "Hello World",
  "logger_name": "no.f12.application",
  "userId": "user-id-something",
  "documentId": "document-id",
  "documentType": "legal",
  "thread_name": "somethread",
  "level": "INFO",
  "level_value": 20000
}

Notice the userId, documentId, documentType attributes added. Those are MDC and Marker values. Read on to figure out how to add them. 🙂

If you miss the old format you can have different output in tests, and even write the old format to a file. Just remember to log JSON in your environments. 🙂

2. Use the Mapped Diagnostic Context to add contextual information

The Mapped Diagnostic Context (MDC) is a powerful tool in logging, allowing you to add contextual information to log statements. When you put something in the MDC, it will be automatically added to all log statements in the same thread or request. This feature can greatly enhance the clarity and usefulness of your log output, providing additional context for troubleshooting and analysis.

Most servers and frameworks should have automatic support for managing MDC. Some can also add extra information, like this KTor example or something similar in Spring Boot. If you cannot find default support in your tool, all you have to do (in a filter or just in the controller) is:

MDC.put("userId", headers.getUserIdFromJwt())

Voila! It will be included with the other log statements related to that request. You can write a query in your preferred log tool to display all log entries for the past hour for a particular user. Just remember to review the documentation regarding clean up if you are not using the support in your framework or server.
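If you do manage the MDC yourself, a small helper makes the cleanup hard to forget. This is a sketch; `withMdc` is my hypothetical helper name, not from any framework:

```kotlin
import org.slf4j.MDC

// The MDC is thread-local and threads are reused between requests, so
// anything you put in must be removed again. This hypothetical helper
// guarantees the cleanup even if the block throws.
fun <T> withMdc(key: String, value: String, block: () -> T): T {
    MDC.put(key, value)
    try {
        return block()
    } finally {
        MDC.remove(key)
    }
}
```

Usage would look like `withMdc("userId", userId) { handleRequest() }`, so every log statement inside the block carries the user id.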

I use MDC for a few different things in different contexts, but you can add anything that helps you diagnose across different requests:

  • The request id (and trace id)
  • The user id
  • The company id
  • The user country and location
  • The datacenter

Note that a request ID is closely connected to tracing identifiers like the W3C Trace Context and OpenTracing identifiers. I usually suggest having a request ID as one field and also including something like this:

MDC.put("traceId", headers.getTraceId())

If you look at the JSON configuration above I have added the includeMdc parameter to the JSON output. That way any MDC parameters are automatically added to the log statements.

3. Use Logback Markers to add structured data to a log-statement

Just like MDC, Markers are Logback’s way of adding additional data to a log statement. But a Marker only applies to that specific statement, where you usually have more specific data available than at the places in your code where you populate the MDC.

It works best with JSON (again automatically added as noted before), and it opens up possibilities for querying and analysis.

To add additional data to a log statement:

// Kotlin helpers for creating a map of key, value
log.info(Markers.appendEntries(mapOf(
    "documentId" to document.id,
    "documentType" to document.type
)), "Document retrieved")

This requires the logstash-logback-encoder dependency; in Gradle, something like this:

implementation("net.logstash.logback:logstash-logback-encoder:7.4")
And you will get it added to the JSON structure for querying. Here’s the output example again:

{
  "@timestamp": "2023-06-17T13:41:01.134+01:00",
  "@version": "1",
  "message": "Hello World",
  "logger_name": "no.f12.application",
  "userId": "user-id-something",
  "documentId": "document-id",
  "documentType": "legal",
  "thread_name": "somethread",
  "level": "INFO",
  "level_value": 20000
}

To Marker or not?

It may be a bit unclear what sets Markers and MDC apart, as they both serve similar purposes. Don’t worry, though, as you’ll understand which one to use as you gain more experience with them.

Generally, I recommend starting with MDC. You can use it in situations such as “we’re currently updating the document with this id” or “this payment is being processed now.”

On the other hand, Markers are suitable when you only require the details provided by a single log statement.

They are both powerful tools. Good luck. 🙂

What are your favorite tips for logging? Let me know in the comments below.

Commodore ‘Tractor’ printer 4022P (Digital Computers) by Commodore Business Machines is licensed under CC-BY-NC-SA 4.0