Application Modernization

Monolith to Microservices Migration — Strangler Fig Pattern

Decompose monolithic applications into independently deployable microservices using the strangler fig pattern — incremental migration that keeps the monolith running throughout, with zero big-bang risk.

Is This You?

Signs Your Monolith Is Holding You Back

Monolithic architecture isn't inherently wrong — but there are clear signals that the architecture is now a constraint on how fast you can build, deploy, and scale.

A Single Bug Fix Requires Redeploying Everything

Change one module, redeploy the whole application. That means a full regression test cycle, a coordinated release window, and the risk that your fix breaks something in a completely unrelated area of the codebase.

Teams Step on Each Other's Code Every Day

Merge conflicts aren't occasional — they're a daily battle. Multiple teams working in the same codebase create coordination overhead that slows everyone down. Feature development waits for other teams to finish and merge.

Scaling Means Scaling Everything — Even What Doesn't Need It

Your checkout service is under load, but you have to scale the entire application — including the reporting module, the admin panel, and the notification service — because there's no way to isolate the one thing that needs more capacity.

You Can't Adopt New Technology for One Feature

A new team wants to use Python for ML inference. Another needs a real-time WebSocket layer. But the monolith's tech stack is locked in. Upgrading one component means upgrading — or rigorously testing — everything that touches it.

Release Cycles Are Measured in Months

Testing the whole system before release takes weeks. Coordinating deployments across teams takes more. The business waits months for features that would take days to build — and the competitive window closes.

Our Strategy

The Strangler Fig Approach

We decompose monoliths incrementally — one service at a time — so the monolith keeps running throughout and customers experience no disruption.

01

Identify Boundaries

Map bounded contexts using Domain-Driven Design. Identify the natural seams in your monolith — where domains are loosely coupled and extraction would be minimally disruptive.

Event storming workshops, aggregate identification, dependency graph analysis.
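The dependency-graph analysis above can be sketched in a few lines. This is an illustrative toy, not our tooling: the module names and edges are hypothetical placeholders, and a real analysis would derive the graph from static analysis of the monolith's imports or call sites. The idea is simply that modules with low total coupling sit on natural seams and are cheaper to extract first.

```python
from collections import defaultdict

# Hypothetical module-level dependency edges (caller -> callee), as
# might be produced by static analysis of the monolith's imports.
DEPENDENCIES = [
    ("checkout", "inventory"),
    ("checkout", "payments"),
    ("admin", "reporting"),
    ("reporting", "orders"),
    ("orders", "inventory"),
    ("notifications", "orders"),
]

def coupling_scores(edges):
    """Count inbound (afferent) plus outbound (efferent) edges per module."""
    inbound = defaultdict(int)
    outbound = defaultdict(int)
    modules = set()
    for caller, callee in edges:
        outbound[caller] += 1
        inbound[callee] += 1
        modules.update((caller, callee))
    return {m: inbound[m] + outbound[m] for m in modules}

scores = coupling_scores(DEPENDENCIES)
# The lowest-coupled modules are the first extraction candidates.
candidates = sorted(scores, key=scores.get)
print(candidates[:3])
```

Coupling counts are only one input — business value and rate of change matter just as much when ranking candidates.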

02

Extract First Service

Extract the highest-value, lowest-risk service first. Route traffic through an API gateway. The monolith keeps running — customers see nothing change.

API gateway setup, strangler proxy routing, parallel data access.
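The routing decision the gateway makes per request can be sketched as a simple prefix table. This is a minimal illustration of the strangler-proxy idea — the URLs and path prefixes are invented placeholders, and in practice this logic lives in gateway configuration (Kong, Envoy, etc.) rather than application code:

```python
# Upstream for everything not yet extracted (placeholder URL).
MONOLITH = "http://monolith.internal"

# Routes claimed by extracted services; everything else falls through
# to the monolith. The table grows one service at a time.
EXTRACTED = {
    "/api/checkout": "http://checkout-svc.internal",
    "/api/notifications": "http://notifications-svc.internal",
}

def route(path):
    """Return the upstream that should handle this request path."""
    for prefix, upstream in EXTRACTED.items():
        if path == prefix or path.startswith(prefix + "/"):
            return upstream
    return MONOLITH

print(route("/api/checkout/cart"))   # handled by the extracted service
print(route("/api/reports/daily"))   # still handled by the monolith
```

Because unmatched paths default to the monolith, an extraction can be rolled back by deleting one routing entry — customers never see the cutover.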

03

Progressive Extraction

Service by service, the monolith shrinks. Each extracted service is independently deployable, independently testable, and independently scalable.

Per-service CI/CD, data decomposition, event-driven sync between services.

04

Monolith Residue

The remaining monolith handles only core shared concerns — or is fully decomposed. Most teams stop here. A small, well-understood monolith core is not a problem.

Decommission decision, shared service extraction, operational stabilisation.

Not every monolith needs full decomposition. Sometimes extracting 3–4 high-value services is enough to solve the scaling and delivery problems you have. We'll tell you when to stop.

Honest Advice

When Microservices Are NOT the Answer

Microservices are powerful — but they come with real operational complexity. Here are the situations where we'd tell you to hold off:

  • Your team has fewer than 10 engineers — the operational overhead of microservices (separate deployments, distributed tracing, service mesh) may outweigh the benefits.

  • The monolith is well-structured with clean module boundaries and good test coverage — refactoring the internals may be all you need.

  • You don't have CI/CD, containerisation, or observability in place — get the DevOps foundation right before decomposing.

  • The performance problem is in the database, not the application — decomposing won't fix a slow query or a missing index.

  • You're doing this because 'microservices are the right architecture' rather than to solve a specific scaling or team delivery problem.

We'll be honest about whether decomposition is right for your situation. Sometimes the best advice is "not yet" — and that's advice worth paying for.

What We Deliver

Microservices Decomposition Services

Four specialised services that cover every phase of monolith-to-microservices migration.

How We Work

Our Decomposition Process

Six phases, from initial domain analysis through operationally stable, independently deployable services.

01

Domain Analysis

Event storming workshops, bounded context mapping, aggregate identification using Domain-Driven Design.

02

Dependency Mapping

Trace runtime dependencies, shared database tables, and integration points across the monolith.

03

Decomposition Strategy

Prioritise which services to extract first — high business value + loose coupling = first candidates.

04

Service Extraction

Strangler fig implementation — API gateway routing, parallel running, data migration per service.

05

Data Decomposition

Split the shared database into per-service databases with event-driven sync and eventual consistency.
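One common way to implement the event-driven sync in this phase is the transactional-outbox pattern: the business write and the event write commit in the same local transaction, so an event can never be lost or published for a row that doesn't exist. The sketch below is illustrative only — table and event names are invented, and a real relay would publish to Kafka or similar instead of returning rows:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE outbox (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        event_type TEXT, payload TEXT, published INTEGER DEFAULT 0
    );
""")

def place_order(order_id):
    # Business row and outbox event commit atomically.
    with db:
        db.execute("INSERT INTO orders VALUES (?, 'placed')", (order_id,))
        db.execute(
            "INSERT INTO outbox (event_type, payload) VALUES (?, ?)",
            ("OrderPlaced", json.dumps({"order_id": order_id})),
        )

def relay_pending():
    # The relay polls the outbox, publishes, then marks rows published.
    rows = db.execute(
        "SELECT id, event_type, payload FROM outbox WHERE published = 0"
    ).fetchall()
    for row_id, _, _ in rows:
        db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    db.commit()
    return rows

place_order(42)
events = relay_pending()
print(events)
```

Consuming services apply these events to their own databases, which is what makes eventual consistency workable across the split.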

06

Operationalise

Per-service CI/CD pipelines, distributed tracing, service mesh, SLOs per service.

Technology

Microservices Technology Stack

We use cloud-agnostic, battle-tested tooling across all decomposition projects — so you're never dependent on a single vendor.

Backend Services

.NET 8, Spring Boot, Node.js, Go

API Gateway

Kong, AWS API Gateway, Azure APIM, Envoy Proxy

Service Mesh

Istio, Linkerd, Consul Connect

Event Streaming

Apache Kafka, RabbitMQ, Amazon EventBridge

Containers

Docker, Kubernetes, Helm

Observability

Grafana, Jaeger (tracing), OpenTelemetry

AI-Assisted Decomposition

We use AI-powered code analysis to accelerate boundary identification — automated dependency graph generation, AI-assisted service boundary suggestions based on code coupling analysis, and intelligent test generation for extracted services that previously had no test coverage.

Results

Decomposition Success Stories

Real monolith decompositions — measured in release velocity, performance improvement, and zero customer-facing downtime.

E-commerce

500K-line PHP monolith → 12 microservices, zero downtime

Decomposed a 500,000-line PHP monolith into 12 microservices using the strangler fig pattern, achieving independent deployments and 4× faster release velocity — with zero customer-facing downtime during the 8-month migration.

SaaS

Billing, auth, and notification extracted from .NET monolith

Extracted billing, authentication, and notification services from a .NET monolith into independently scalable .NET 8 microservices, reducing billing processing time by 60% and enabling teams to deploy independently for the first time.

Logistics

Java EE monolith → Spring Boot microservices, real-time tracking

Migrated a legacy Java EE monolith to Spring Boot microservices, enabling real-time fleet tracking that was architecturally impossible with the monolith's batch processing model — delivered in a 10-week strangler fig migration.

Why Kansoft

Why Clients Choose Us for Decomposition

Strangler Fig Specialists

Incremental migration that keeps the monolith running throughout — not a risky big-bang rewrite that takes two years and delivers nothing.

DDD Practitioners First

We start with domain modelling and event storming, not technology choices. Getting the service boundaries right upfront prevents expensive rework later.

Honest Decomposition Assessment

We'll tell you if microservices aren't right for your situation — even if that's not the answer you came in wanting to hear.

End-to-End Delivery

From domain analysis through Kubernetes-deployed, observable microservices. We don't hand off the hard parts to your team.

Data Decomposition Expertise

Splitting the shared database is the hardest part of a microservices migration. We've done it — with per-service databases, event-driven sync, and validated cutover.

FAQ

Frequently Asked Questions

Common questions about monolith-to-microservices migration — answered directly.

What is the strangler fig pattern for microservices migration?
The strangler fig pattern is an approach to incrementally extracting services from a monolith without rewriting it all at once. You introduce an API gateway in front of the monolith, then route specific request paths to newly extracted services while everything else continues going to the monolith. Service by service, the monolith handles fewer requests. Eventually, the monolith is either fully replaced or reduced to a small residual core. The pattern is named after the strangler fig plant, which grows around a host tree until it can stand on its own.
How long does it take to decompose a monolith into microservices?
Timelines vary significantly based on the monolith's size, database coupling, and team capacity. A focused extraction of 3-4 services typically takes 3-6 months. Full decomposition of a large monolith into 10-20+ services typically runs 9-18 months across multiple waves. We start with a domain analysis to identify which services to extract first and produce a realistic wave plan before committing to any timeline.
How do you handle the shared database when splitting into services?
Database decomposition is the hardest part of microservices migration. We use several approaches depending on the situation: shared database as a transitional state (services share the DB but own their tables), database-per-service with synchronous API calls, and event-driven eventual consistency for loose coupling. We migrate databases in phases, never splitting a database across a service boundary without first eliminating the cross-service database joins in application code.
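Finding the cross-service joins that must be eliminated first can start as a crude scan of application queries against a table-ownership map. This sketch is illustrative only — the service names, tables, and queries are invented, and real analysis would parse SQL properly rather than regex-scan it:

```python
import re

# Hypothetical map of which future service owns each table.
TABLE_OWNER = {
    "orders": "orders-svc",
    "order_lines": "orders-svc",
    "customers": "customers-svc",
    "invoices": "billing-svc",
}

def crosses_boundary(sql):
    """Return True if the query joins tables owned by different services."""
    tables = re.findall(r"(?:FROM|JOIN)\s+(\w+)", sql, re.IGNORECASE)
    owners = {TABLE_OWNER[t] for t in tables if t in TABLE_OWNER}
    return len(owners) > 1

safe = "SELECT * FROM orders JOIN order_lines ON lines.order_id = orders.id"
unsafe = "SELECT * FROM orders JOIN customers ON orders.customer_id = customers.id"
print(crosses_boundary(safe), crosses_boundary(unsafe))
```

Queries flagged here have to be rewritten as two service calls (or replaced with event-fed read models) before the tables can move to separate databases.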
Do we need Kubernetes to run microservices?
No. Kubernetes is useful for managing many independently deployed services at scale, but it adds significant operational complexity. For teams extracting 3-5 services, managed container services (AWS ECS, Azure Container Apps) are often a better starting point. We size the operational platform to the number of services — we won't push you toward Kubernetes if simpler options meet your needs.
How do you ensure data consistency across microservices?
We use a combination of strategies depending on the consistency requirement. For operations requiring strong consistency (financial transactions, inventory), we use the saga pattern with compensating transactions. For reporting and read-heavy workloads, we use event-driven eventual consistency with event sourcing where appropriate. We're explicit about where eventual consistency is acceptable and where it's not — not everything can tolerate it.
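The core of the saga pattern is that every step carries a compensating action, and on failure the completed steps are undone in reverse order. The sketch below is a minimal in-process illustration with invented step names — real sagas run across services via messages or an orchestrator:

```python
class SagaFailed(Exception):
    pass

def run_saga(steps):
    """Run (action, compensation) pairs; on failure, compensate in reverse."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception as exc:
            for undo in reversed(done):
                undo()  # best-effort rollback of completed steps
            raise SagaFailed(str(exc))

def fail_shipping():
    raise RuntimeError("shipping service down")

log = []
steps = [
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (lambda: log.append("charge card"), lambda: log.append("refund card")),
    (fail_shipping, lambda: log.append("cancel shipment")),
]

try:
    run_saga(steps)
except SagaFailed:
    pass

print(log)
```

When the shipping step fails, the card is refunded and the stock released — the system ends consistent without ever holding a distributed lock.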
What if we only want to extract a few services, not fully decompose?
That's often the right answer. Partial decomposition — extracting 3-5 services that have the clearest business justification (highest load, fastest-changing, most team-conflict-causing) — delivers most of the benefit without the full complexity of a completely decomposed architecture. We'll recommend stopping points that balance benefit against operational overhead.

Ready to Break Free from Your Monolith?

Tell us your monolith's stack and scale — we'll map the right decomposition strategy.

Book a Free Call