
Decomposing the database for microservices

There are many ways to extract functionality into microservices. However, we need to address the elephant in the room—namely, what do we do about our data? Microservices work best when we practice information hiding, which in turn typically leads us toward microservices totally encapsulating their own data storage and retrieval mechanisms. This leads us to the conclusion that, when migrating toward a microservices architecture, we need to split our monolith’s database if we want to get the best out of the transition. However, splitting a database is far from a simple endeavour. We need to consider issues of data synchronization during transition, logical versus physical schema decomposition, transactional integrity, joins, latency and more.

The shared database

In this pattern, a single data source is shared across all the services. This can be appropriate when a service is directly exposing a database as a defined endpoint that is designed and managed to handle multiple consumers.
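
As a minimal sketch of the coupling this creates, assuming a hypothetical shared database with orders and invoices tables, two services might each open their own connection to the same schema:

import sqlite3

SHARED_DB = "shared_warehouse.db"  # one database used by several services (hypothetical)

def order_service_list_orders():
    # The order service reads directly from the shared schema.
    with sqlite3.connect(SHARED_DB) as conn:
        return conn.execute("SELECT id, status FROM orders").fetchall()

def finance_service_list_invoices():
    # The finance service queries the very same database, so both services
    # stay coupled to the one shared schema.
    with sqlite3.connect(SHARED_DB) as conn:
        return conn.execute("SELECT id, amount FROM invoices").fetchall()

The trade-off is that any change to those shared tables has to be coordinated across every consumer.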

Database view

In this pattern, the data source is exposed to consumers as a single, read-only database view, which makes it a good fit for consumers that only need to read the data.
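
A minimal sketch with SQLite, assuming a hypothetical customers table: the view projects only the columns consumers are allowed to read, and a SQLite view is read-only by definition.

import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical internal table owned by the monolith.
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, credit_card TEXT)")
# Expose only a safe projection of that table to consumers.
conn.execute("CREATE VIEW customer_summary AS SELECT id, name FROM customers")
# Consumers query the view and never see the hidden columns.
rows = conn.execute("SELECT id, name FROM customer_summary").fetchall()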

Database wrapping service

In this pattern, we hide the database behind a service that acts as a thin wrapper, moving database dependencies to become service dependencies. This pattern works well when the underlying schema is just too hard to consider pulling apart. By placing an explicit wrapper around the schema and making it clear that the data can be accessed only through that service, you can prevent the database from growing any further. It clearly delineates what is “yours” versus what is “someone else’s.”
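
A minimal sketch of such a wrapper, assuming Flask is available and using hypothetical table and endpoint names; consumers depend on the HTTP endpoint rather than on the schema behind it:

import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)
LEGACY_DB = "legacy.db"  # hypothetical monolith database being wrapped

@app.get("/invoices/<int:invoice_id>")
def get_invoice(invoice_id):
    # The wrapper is the only component that knows the underlying schema,
    # so that schema can change without breaking consumers.
    with sqlite3.connect(LEGACY_DB) as conn:
        row = conn.execute(
            "SELECT id, amount, status FROM invoices WHERE id = ?",
            (invoice_id,),
        ).fetchone()
    if row is None:
        return jsonify(error="not found"), 404
    return jsonify(id=row[0], amount=row[1], status=row[2])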

Database-as-a-Service interface

This pattern aims to create a dedicated database designed to be exposed as a read-only endpoint, and to have this database populated when the data changes in the underlying database. It fits reporting use cases very well—these are situations where your clients might need to join across large amounts of data that a given service holds. This idea could be extended to then import this database’s data into a larger data warehouse, allowing for data from multiple services to be queried.
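
A minimal sketch of the populate step, using hypothetical database and table names; a real implementation would typically push only the rows that changed (for example via change data capture) rather than recopying everything:

import sqlite3

def refresh_reporting_db(internal_path="internal.db", reporting_path="reporting.db"):
    # Copy rows from the service's internal database into the dedicated,
    # externally exposed, read-only reporting database.
    src = sqlite3.connect(internal_path)
    dst = sqlite3.connect(reporting_path)
    dst.execute(
        "CREATE TABLE IF NOT EXISTS shipments (id INTEGER PRIMARY KEY, status TEXT, updated_at TEXT)"
    )
    rows = src.execute("SELECT id, status, updated_at FROM shipments").fetchall()
    dst.executemany("INSERT OR REPLACE INTO shipments VALUES (?, ?, ?)", rows)
    dst.commit()
    src.close()
    dst.close()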

Aggregate exposing monolith

When the data you want to access is still “owned” by the monolith, this pattern works well to give your new services the access they need. When extracting services, having the new service call back into the monolith to access the data it needs is likely to be only a little more work than reaching directly into the monolith’s database, but in the long term it’s a much better idea. We can consider using a database view over this approach only if the monolith in question cannot be changed to expose these new endpoints.
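
A minimal sketch of that callback, assuming the requests library and an illustrative endpoint that the monolith has been changed to expose:

import requests

MONOLITH_BASE_URL = "http://monolith.internal:8080"  # hypothetical host

def fetch_customer_orders(customer_id: int) -> list:
    # The new service asks the monolith for the aggregate it needs instead
    # of reaching into the monolith's database directly.
    response = requests.get(
        f"{MONOLITH_BASE_URL}/api/customers/{customer_id}/orders",
        timeout=5,
    )
    response.raise_for_status()
    return response.json()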

Change data ownership

If your newly extracted service encapsulates the business logic that changes some data, that data should be under the new service’s control; it should be moved out of its current home and into the new service.
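
A minimal sketch of the handover, with hypothetical database and table names: the data is copied into the new service's own database and then removed from the monolith, which from then on reaches it through the service's API.

import sqlite3

def move_invoice_data(monolith_db="monolith.db", service_db="invoice_service.db"):
    src = sqlite3.connect(monolith_db)
    dst = sqlite3.connect(service_db)
    # The new service now owns this data in its own database.
    dst.execute(
        "CREATE TABLE IF NOT EXISTS invoices (id INTEGER PRIMARY KEY, amount REAL, status TEXT)"
    )
    rows = src.execute("SELECT id, amount, status FROM invoices").fetchall()
    dst.executemany("INSERT OR REPLACE INTO invoices VALUES (?, ?, ?)", rows)
    dst.commit()
    # Remove the old table so no one can keep reaching into the monolith for it.
    src.execute("DROP TABLE invoices")
    src.commit()
    src.close()
    dst.close()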

Synchronize data in application

In this pattern, the application itself would perform the synchronization between the two data sources. The idea is that, initially, the existing database would remain the source of truth—but, for a period, the application would ensure that data in the existing database and the new database were kept in sync. After this period, the new database would move to being the source of truth for the application, prior to the old database being retired. You could consider using this pattern when both your monolith and microservices are accessing the data.
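
A minimal sketch of the transition phase, with hypothetical database and table names: the application writes to both databases, while reads still go to the old database because it remains the source of truth.

import sqlite3

OLD_DB = "monolith.db"        # current source of truth
NEW_DB = "orders_service.db"  # will become the source of truth later

def save_order(order_id: int, status: str) -> None:
    # During the transition the application keeps both databases in sync.
    for path in (OLD_DB, NEW_DB):
        with sqlite3.connect(path) as conn:
            conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, status TEXT)")
            conn.execute("INSERT OR REPLACE INTO orders VALUES (?, ?)", (order_id, status))

def load_order(order_id: int):
    # Reads stay on the old database until the new one takes over.
    with sqlite3.connect(OLD_DB) as conn:
        return conn.execute("SELECT id, status FROM orders WHERE id = ?", (order_id,)).fetchone()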

There are two points to consider before you embark on a microservices journey. First, give yourself enough space and gather the right information to make rational decisions. Don’t just copy others—think instead about your problem and your context, assess the options and then move forward, while being open to change if that’s needed later. Second, remember that the key is incremental adoption of microservices and many of the associated technologies and practices. No two microservices architectures are alike—while there are lessons to be learned from the work done by others, you need to take the time to find an approach that works well in your context. By breaking the journey into manageable steps, you give yourself the best chance of success, as you can adapt your approach as you go. Microservices are not for everyone. But, hopefully, after reading this blog series, you’ll have a better sense of whether they are right for you, as well as some ideas about how to get started on the journey.

OpenText™ Professional Services has vast experience in helping customers transition to microservices-based solutions. Contact us for support on your microservices transformation journey.

Author: Venkatesh Sundararajan is a Senior Consultant, Professional Services – Center of Excellence, with more than eight years of experience building highly scalable, end-to-end customer solutions involving OpenText products globally.

OpenText Professional Services

OpenText Professional Services offers the largest pool of OpenText EIM product and solution certified experts in the world. They bring market-leading field experience, knowledge, and innovative creativity built over more than 25 years and over 40,000 engagements.
