Getting rid of legacy systems with Micro Services and Event Driven Architectures

Author
Sander van der Sloot
Date

Do your legacy systems no longer support your business goals? Are they stalling your Digital Transformation? This article outlines a strategy to phase out your legacy systems!

The term legacy is sometimes used pejoratively, which is not always fair to systems that still function very well. However, as soon as a system becomes more expensive to keep alive than it ever was, or fails to keep up with ever-changing (online) needs, we want to migrate to a system based on contemporary standards - and that is when we call it legacy.

Those standards trend towards agility, automated scalability, off-premise hosting, decoupled (autonomous) components, and so on. Older systems usually sit at the opposite end of the spectrum. So if you want to be a pure internet player like Amazon or Netflix, you probably also want to adopt new technologies early, scale just the relevant parts of your system, and have multiple teams work on the product, making small changes in a frequent release cycle. All of this is within reach if you adopt Micro Services as an architectural approach.

Definition: in this article we regard a legacy system as a computer system that is outdated or in need of replacement.

How the system used to work

To find out how to get rid of the legacy and take the first steps towards your business goals through architectural changes, let's first determine the situation and the problem. In the first cartoon we see a chimp acting as an API, serving the internal customer the information he needs - as a banana - from the tree system.

[Cartoon 1]

When legacy is a problem

A legacy system may no longer support the load, knowledge may have been lost, vendors may no longer exist, the system may contain core/essential data and rules, (security) updates may no longer be possible, the hardware may be outdated, development is slow or has stopped altogether, and the Total Cost of Ownership (TCO) is high. In short, the system becomes an increasing business risk, especially when the number of internal and external users grows rapidly.

[Cartoon 2]

Often the following patterns are tried to solve the problem, but they only seem to delay the inevitable. (Note: these are stereotypical hardware solutions.)

  • Large caches between the system and the actors that request the data
  • High computational power to prevent the system from halting
  • Highly redundant infrastructure to provide all kinds of fail-over scenarios

Solution Statement

Eat your legacy system step by step: create Micro Services and/or micro-events from the legacy system, put its (useful) data in modern storage facilities, and recreate common web-oriented user interfaces on top of it.

Solution

Finding a solution pattern depends on the combination of three properties of legacy systems.

  1. The legacy data model is known or a development environment exists

  2. Data is not mutated through the (legacy) system any longer

  3. A well-known data storage solution is used (e.g. Microsoft SQL Server, not a custom object store)

Let's look into a typical scenario based on a case.

Case

In this company there were various legacy ERP-like systems containing all sorts of information (the vendor of one of the systems had even gone bankrupt years ago). As described above, those systems were not designed to be accessed at the scale of potentially millions of online users. To fulfil the inevitable online need, an API had been created as middleware between the online interaction and the back-end. Some of the systems contained all kinds of business logic and data that no one even knew was there. Maintaining those applications was therefore quite risky, especially for the IT department responsible for keeping them running. Developing new business functionality turned into very long-running projects as soon as the ideas involved back-end information. In this case properties one and three apply: (1) development was still going on, and (3) the data models as well as their Database Management System were well known.

The solution for this case was to find an autonomous function or domain and redesign its user interaction and user interface first. The contract was then implemented in a Micro Service, and its data store was fed by events coming from triggers created in the legacy system for this purpose. This cycle was repeated until all desired functionality was re-created in a web-oriented interface.
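
As a rough illustration of this cycle, the sketch below shows a Micro Service keeping its own data store (a simple in-memory map standing in for a modern database) that is fed by change events emitted from triggers in the legacy system. The event and entity names are hypothetical; any eventing transport (a message broker, an outbox table) could sit in front of onLegacyEvent.

```java
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: the Micro Service keeps its own data store (here an
// in-memory map standing in for a modern database) and is fed by change events
// emitted from triggers in the legacy system. Names and fields are assumptions.
public class OrderProjection {

    public record OrderChangedEvent(String orderId, String status, Instant changedAt) {}

    public record Order(String orderId, String status, Instant lastChange) {}

    private final Map<String, Order> store = new ConcurrentHashMap<>();

    // Called for every change event coming out of the legacy system.
    public void onLegacyEvent(OrderChangedEvent event) {
        store.put(event.orderId(),
                new Order(event.orderId(), event.status(), event.changedAt()));
    }

    // The web-oriented user interface reads from the Micro Service's own store,
    // never from the legacy system directly.
    public Order findOrder(String orderId) {
        return store.get(orderId);
    }
}
```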

The cartoon depicts this as one frog per (business) function/value stream/domain, while the legacy system's role and its API diminish.

[Cartoon 3]

Data mutation was still done through the original API that had been put in place to expose the data in the first place. Mutations were also kept in the Micro Services, but events for the same entities coming from the legacy system were dominant.
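
This precedence rule can be made explicit in code. The sketch below is only an assumption of how it could look: local mutations are kept, but an incoming legacy event for the same entity always overwrites them, because the legacy system is still the system of record.

```java
// Hypothetical sketch of the precedence rule: the Micro Service accepts local
// mutations, but a legacy event for the same entity always overwrites them,
// because the legacy system is still the system of record. Names are assumptions.
public class CustomerState {

    public enum Source { LEGACY, MICRO_SERVICE }

    public record Customer(String id, String email, Source lastWriter) {}

    private Customer current;

    public void applyLocalMutation(String id, String email) {
        // Local changes are kept, flagged as written by the Micro Service.
        current = new Customer(id, email, Source.MICRO_SERVICE);
    }

    public void applyLegacyEvent(String id, String email) {
        // Events of the same entity coming from the legacy system are dominant:
        // they replace whatever the Micro Service wrote locally.
        current = new Customer(id, email, Source.LEGACY);
    }

    public Customer current() {
        return current;
    }
}
```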

As soon as no actors were changing data directly within the system anymore, the existing API was untied and the legacy system was disposed of!

A suitable name for this solution pattern could be: "Use eventing to build new data stores within Micro Services". In the cartoon the system and its API are retired :-)

[Cartoon 4]

System (re)Design Approach

All solutions ought to follow the order below for their system design iterations - in short cycles (a minimal sketch of steps 3 and 4 follows the list).

  1. Design the user interaction (possibly one for each actor), then
  2. the business rules, followed by
  3. the API contract, and
  4. a Micro Service implementation that fulfils the contract, and finally (if applicable)
  5. a set of events or an API to extract the data.
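
To make steps 3 and 4 concrete, here is a minimal, hypothetical sketch in Java: the API contract is fixed as an interface derived from the user interaction and business rules, and a Micro Service then fulfils that contract on top of its own data store. All names (ProductCatalogContract, findBySku, and so on) are assumptions for illustration.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Step 3: the API contract, fixed as an interface once the user interaction
// and business rules are known. All names are assumptions for illustration.
public interface ProductCatalogContract {

    record Product(String sku, String name, long priceInCents) {}

    Optional<Product> findBySku(String sku);
}

// Step 4: a Micro Service fulfils the contract on top of its own data store.
// How that store is filled (events or an extraction API, step 5) is decided last.
class ProductCatalogService implements ProductCatalogContract {

    private final Map<String, Product> store = new ConcurrentHashMap<>();

    void onProductEvent(Product product) {
        store.put(product.sku(), product);
    }

    @Override
    public Optional<Product> findBySku(String sku) {
        return Optional.ofNullable(store.get(sku));
    }
}
```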

Following this order, a system is recreated with exactly - and no more than - the currently needed data and business rules, while old rules and false, inconsistent or duplicate data are ignored! Find out which of the properties apply to your situation and read the epilogue below for a more thorough elaboration of each possible scenario.

Conclusion

Basically, when the data of the legacy system no longer changes, a set of Micro Services forms a temporary layer until all needed data has been extracted over time. When the data does change, also create events by setting triggers on field updates. If the data model is not known, there is no other way than to invest in obtaining this knowledge first.

Advantages of this approach: no large, risky migration is needed. On the contrary, the shift will be seamless, clean and smooth. Users will gradually move to the newly created system, until no one uses the legacy system anymore. Then the remains of the legacy system can be disposed of, while only the necessary data and business rules are extracted into the new system. A great Java example can be found here.

So, in this way you are softly getting rid of legacy systems with Micro Services and an Event Driven Architecture (and silently moving towards what is now a disruptive architecture). What's left is a suite of Micro Services serving as one new application.

Epilogue: Solution Patterns

Below is a table of the properties (X means the property applies, O means it does not) and the solution pattern each combination suits, followed by a detailed look at each scenario. These are possible variations on the typical solution described in the case.

N | Data model is known / development environment exists | Data is not mutated through the system any longer | A well-known data storage solution is used | Solution
1 | X | O | O | Create a Micro Services layer to extract data as an API
2 | O | X | O | Create a Micro Services layer upon the existing model
3 | O | O | X | Use eventing to create a new system
4 | X | X | O | Create a Micro Services layer to extract data as an API; implement (refactored) business rules on extracted data
5 | O | X | X | Use storage of events to build a new system (API) and connect to the new system
6 | X | O | X | Use eventing to build new data stores
7 | X | X | X | Create a Micro Services layer to extract data directly from the database as an API
8 | O | O | O | Start all over :-(

1. Create a Micro Services Layer to extract data as an API

This still requires a large capacity from the legacy system. If the software is still under maintenance, implement the Micro Services as events on updates. Implement (refactored) business rules on the extracted data. Create a user interface with the corresponding business rules in parallel with the Micro Service, to seamlessly dissolve the legacy interface. Dispose of the legacy system as soon as the new system meets current needs.
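
A possible shape of such an extraction layer - purely an assumption, with a hypothetical LegacyClient standing in for whatever read access the legacy system still offers - is a scheduled job that copies recent changes into the Micro Service's own store, which then serves the new API:

```java
import java.time.Instant;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a scheduled extraction job pulls recent changes through
// whatever read access the legacy system still offers (LegacyClient is a
// stand-in) and copies them into the Micro Service's own store, which then
// serves the new API. All names and fields are assumptions.
public class InvoiceExtractionService {

    public record Invoice(String number, long amountInCents) {}

    /** Stand-in for the legacy system's read access. */
    public interface LegacyClient {
        List<Invoice> fetchInvoicesChangedSince(Instant since);
    }

    private final LegacyClient legacy;
    private final Map<String, Invoice> store = new ConcurrentHashMap<>();
    private Instant lastRun = Instant.EPOCH;

    public InvoiceExtractionService(LegacyClient legacy) {
        this.legacy = legacy;
    }

    // Run on a schedule; each run still puts load on the legacy system.
    public void extract() {
        Instant now = Instant.now();
        for (Invoice invoice : legacy.fetchInvoicesChangedSince(lastRun)) {
            store.put(invoice.number(), invoice);
        }
        lastRun = now;
    }

    // The new API reads from the extracted copy, not from the legacy system.
    public Invoice findInvoice(String number) {
        return store.get(number);
    }
}
```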

2. Create a Micro Services layer upon the existing model

This is not trivial! Invest in retrieving the knowledge, even by hiring retired people. A one-time migration can then be a solution, but it also carries over the flaws, faults and inconsistencies. It is better to create a Micro Services 'layer' upon the model and continue as in solution one.

3. Use eventing to create a new system

Create triggers that throw events, and store these events to build a new model that supports the user interfaces, business rules and API contracts needed, until the new system covers all needs. Then dispose of the legacy system.
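
Since a well-known data storage solution is in place here, the triggers can live in the database itself. The sketch below is an assumption of how this could look for Microsoft SQL Server: a T-SQL trigger copies every insert or update into an outbox table, installed via plain JDBC (the SQL Server JDBC driver is assumed to be on the classpath; table, column and connection details are placeholders). A separate process, not shown, would read the outbox and publish the rows as events.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical sketch: install an AFTER INSERT/UPDATE trigger on a legacy
// SQL Server table that copies every change into an outbox table. A separate
// process (not shown) reads the outbox and publishes the rows as events.
public class InstallChangeTrigger {

    // T-SQL for the trigger; table and column names are placeholders.
    private static final String CREATE_TRIGGER = """
            CREATE TRIGGER trg_customer_changed
            ON dbo.Customer
            AFTER INSERT, UPDATE
            AS
            BEGIN
                INSERT INTO dbo.CustomerOutbox (CustomerId, Name, Email, ChangedAt)
                SELECT CustomerId, Name, Email, SYSUTCDATETIME()
                FROM inserted;
            END
            """;

    public static void main(String[] args) throws Exception {
        // Placeholder connection string and credentials for the legacy database.
        try (Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://legacy-host;databaseName=erp", "user", "password");
             Statement st = con.createStatement()) {
            st.execute(CREATE_TRIGGER);
        }
    }
}
```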

4. Create a Micro Services Layer to extract data as an API, implement (refactored) business rules on extracted data

Create a Micro Services layer to extract data as an API; this still requires a large capacity from the legacy system. Implement (refactored) business rules on the extracted data. Create a user interface with the corresponding business rules in parallel with the Micro Service, to seamlessly dissolve the legacy interface. Dispose of the legacy system as soon as the new system meets current needs.

5. Use eventing to build a new system (API) and connect it to the new system

Without understanding the model, create triggers that throw events and store them to build a new model that supports the user interfaces, business rules and API contracts needed. As an alternative, recreate the model and business rules if they are still applicable.

6. Use eventing to build new data stores

Create triggers that throw events and store them to build a new model that supports the user interfaces, business rules and API contracts needed. As an alternative, recreate the model and business rules if they are still applicable.

7. Create a Micro Services layer to extract data directly from the database as an API

This is the easiest solution: create a Micro Services layer that extracts data directly from the database as an API. Dispose of the legacy system immediately, and of the database as soon as the new system meets current needs. A migration would also carry over the flaws, faults and inconsistencies.
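
A minimal sketch of such a read-only layer, assuming Microsoft SQL Server as the well-known storage solution (connection string, table and column names are placeholders, and the SQL Server JDBC driver is assumed to be available):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a read-only Micro Service data access layer that pulls
// records straight from the well-known legacy database. Connection string,
// table and column names are placeholders.
public class LegacyCustomerReader {

    public record Customer(long id, String name, String email) {}

    private static final String URL = "jdbc:sqlserver://legacy-host;databaseName=erp";

    public List<Customer> findAll() throws SQLException {
        List<Customer> result = new ArrayList<>();
        String sql = "SELECT CustomerId, Name, Email FROM dbo.Customer";
        try (Connection con = DriverManager.getConnection(URL, "user", "password");
             PreparedStatement ps = con.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                result.add(new Customer(rs.getLong("CustomerId"),
                        rs.getString("Name"),
                        rs.getString("Email")));
            }
        }
        return result;
    }
}
```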
