Architecture · 7 min read

From Monolith to Microservices: A Practical Guide

Obinna Agim

Every engineering leader will eventually face the monolith question: "Should we break this apart?" Having led multiple monolith-to-microservices migrations in fintech environments — where downtime means real financial impact — I've developed a practical framework for making this transition successfully.

When NOT to Migrate

Let me start with the contrarian view: most teams shouldn't migrate to microservices.

Microservices introduce distributed systems complexity. If your team isn't already good at:

  • Writing automated tests
  • Operating production systems
  • Monitoring and observability
  • Continuous deployment

...then microservices will make your problems worse, not better.

Valid Reasons to Migrate

  • Teams are stepping on each other's code changes
  • Deployment of one feature blocks another team's release
  • Different parts of the system need different scaling characteristics
  • Regulatory requirements demand isolation between services

Invalid Reasons to Migrate

  • "Netflix does it"
  • "Microservices are modern"
  • "Our architecture is old"
  • "We want to use Kubernetes"

The Strangler Fig Pattern

The most reliable migration strategy is the Strangler Fig pattern. Instead of rewriting everything at once (Big Bang — which almost always fails), you gradually replace pieces of the monolith.

Phase 1: Identify Boundaries
┌─────────────────────────────┐
│          MONOLITH           │
│   ┌─────┐ ┌─────┐ ┌─────┐   │
│   │User │ │Pay- │ │Noti-│   │
│   │Mgmt │ │ment │ │fica-│   │
│   │     │ │     │ │tion │   │
│   └─────┘ └─────┘ └─────┘   │
└─────────────────────────────┘

Phase 2: Extract First Service
┌─────────────────────┐  ┌──────────┐
│      MONOLITH       │  │Notifica- │
│   ┌─────┐ ┌─────┐   │←→│tion Svc  │
│   │User │ │Pay- │   │  │          │
│   │Mgmt │ │ment │   │  └──────────┘
│   └─────┘ └─────┘   │
└─────────────────────┘

Phase 3: Continue Extraction
┌───────────┐ ┌──────────┐ ┌──────────┐
│ MONOLITH  │ │Notifica- │ │ Payment  │
│  ┌─────┐  │ │tion Svc  │ │ Service  │
│  │User │  │ │          │ │          │
│  │Mgmt │  │ └──────────┘ └──────────┘
│  └─────┘  │
└───────────┘

Phase 4: Complete
┌──────────┐ ┌──────────┐ ┌──────────┐
│  User    │ │Notifica- │ │ Payment  │
│  Service │ │tion Svc  │ │ Service  │
│          │ │          │ │          │
└──────────┘ └──────────┘ └──────────┘
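In practice, the strangler fig sits behind a routing facade: every request still enters through one front door, and a small routing table decides whether the monolith or an extracted service handles it. A minimal sketch (the path prefixes and service names here are illustrative, not from any real migration):

```typescript
// Routing facade for a strangler-fig migration. Paths claimed by
// extracted services are routed away; everything else falls through
// to the monolith by default.
const extractedRoutes: Record<string, string> = {
  "/notifications": "notification-service",
  "/payments": "payment-service",
};

function routeRequest(path: string): string {
  for (const prefix of Object.keys(extractedRoutes)) {
    if (path.startsWith(prefix)) return extractedRoutes[prefix];
  }
  return "monolith"; // the monolith still owns every unclaimed route
}
```

As each service is extracted you add one entry to the table; when the monolith owns no routes, Phase 4 is done and it can be decommissioned.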

Step-by-Step Migration Framework

Step 1: Map Your Domain Boundaries

Before touching any code, map your domain using Domain-Driven Design (DDD) concepts:

// Identify bounded contexts in your monolith
const boundedContexts = {
  userManagement: {
    entities: ["User", "Role", "Permission", "Session"],
    events: ["UserRegistered", "UserDeactivated", "RoleAssigned"],
    externalDeps: ["EmailService", "AuditLog"],
  },
  payments: {
    entities: ["Transaction", "Account", "Ledger", "Settlement"],
    events: ["PaymentInitiated", "PaymentCompleted", "RefundProcessed"],
    externalDeps: ["BankGateway", "FraudDetection", "NotificationService"],
  },
  notifications: {
    entities: ["Template", "Channel", "Preference", "DeliveryLog"],
    events: ["NotificationSent", "NotificationFailed", "PreferenceUpdated"],
    externalDeps: ["SMSProvider", "EmailProvider", "PushProvider"],
  },
};
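A map like this can also drive the extraction order. One rough heuristic is to rank contexts by external-dependency count, since fewer dependencies usually means a cheaper first extraction. A sketch of that scoring (dependency count is only one signal, as the note after shows):

```typescript
// Rank bounded contexts by how many external dependencies they have,
// ascending — a rough proxy for extraction cost, nothing more.
interface BoundedContext {
  entities: string[];
  events: string[];
  externalDeps: string[];
}

function extractionOrder(contexts: Record<string, BoundedContext>): string[] {
  return Object.entries(contexts)
    .sort(([, a], [, b]) => a.externalDeps.length - b.externalDeps.length)
    .map(([name]) => name);
}
```

On the map above, this heuristic actually favors userManagement (two dependencies), but raw counts miss risk: everything depends on users, while nothing blocks on notifications. That is why Step 3 below extracts Notifications first anyway.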

Step 2: Establish the Integration Layer

Before extracting any service, set up the communication infrastructure:

// Event bus for async communication between services
interface DomainEvent {
  eventId: string;
  eventType: string;
  aggregateId: string;
  timestamp: Date;
  payload: Record<string, unknown>;
  metadata: {
    correlationId: string;
    causationId: string;
    version: number;
  };
}

// Start with a simple in-process event bus, evolve to RabbitMQ/Kafka later
class EventBus {
  private handlers = new Map<string, Array<(event: DomainEvent) => Promise<void>>>();

  async publish(event: DomainEvent): Promise<void> {
    // Phase 1: in-process event dispatch (shown here)
    // Phase 2: message queue (RabbitMQ/SQS)
    // Phase 3: event streaming (Kafka)
    const subscribers = this.handlers.get(event.eventType) ?? [];
    await Promise.all(subscribers.map((handle) => handle(event)));
  }

  async subscribe(
    eventType: string,
    handler: (event: DomainEvent) => Promise<void>
  ): Promise<void> {
    // Register the handler for this event type
    const existing = this.handlers.get(eventType) ?? [];
    this.handlers.set(eventType, [...existing, handler]);
  }
}

Step 3: Extract the Least Coupled Service First

Start with the service that has the fewest dependencies on the monolith. In our case, it was the Notification service:

Why Notifications first?

  • Read-only relationship with other domains (receives events, doesn't produce data others need)
  • Clear API boundary (send notification)
  • Easy to test in isolation
  • Low risk if something goes wrong (delayed notification vs. lost payment)
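Concretely, the handoff looks like this: the monolith keeps publishing the events it already emits, and the extracted service only consumes them. A sketch (the event shape is a trimmed-down DomainEvent from Step 2, and the delivery logic is a stand-in):

```typescript
// The extracted Notification service consumes monolith events; it never
// calls back into the monolith's code or reads its database.
interface DomainEvent {
  eventId: string;
  eventType: string;
  aggregateId: string;
  payload: Record<string, unknown>;
}

const deliveries: string[] = []; // stand-in for the real delivery channel

function handlePaymentCompleted(event: DomainEvent): void {
  if (event.eventType !== "PaymentCompleted") return;
  // Everything the notification needs travels in the event payload,
  // so the service stays decoupled from the monolith's schema.
  deliveries.push(`receipt for ${event.aggregateId}`);
}
```

Because the relationship is consume-only, a bug here delays a receipt; it cannot corrupt a payment.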

Step 4: Implement the Anti-Corruption Layer

The anti-corruption layer (ACL) translates between the monolith's data model and the new service's model:

// Anti-corruption layer in the new Notification service
class MonolithAdapter {
  // Translate monolith user model to notification service model
  toNotificationRecipient(monolithUser: MonolithUserDTO): Recipient {
    return {
      id: monolithUser.userId,
      email: monolithUser.emailAddress, // Different field name in monolith
      phone: monolithUser.mobileNumber,
      preferences: this.mapPreferences(monolithUser.notifSettings),
    };
  }

  private mapPreferences(settings: unknown): NotificationPreferences {
    // Handle the messy monolith data format
    // This is where you contain the complexity
  }
}
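Here is a self-contained version of that adapter with the DTO and target types filled in, so the translation runs end to end. The legacy field names and the comma-separated settings format are illustrative assumptions, not the actual monolith schema:

```typescript
// Assumed legacy shape coming out of the monolith (illustrative only)
interface MonolithUserDTO {
  userId: string;
  emailAddress: string;  // legacy field name
  mobileNumber: string;
  notifSettings: string; // e.g. "email,sms" — assumed messy legacy format
}

// The notification service's own clean model
interface Recipient {
  id: string;
  email: string;
  phone: string;
  preferences: { channels: string[] };
}

// The one place that understands the legacy format; nothing downstream does.
function toNotificationRecipient(u: MonolithUserDTO): Recipient {
  return {
    id: u.userId,
    email: u.emailAddress,
    phone: u.mobileNumber,
    preferences: {
      channels: u.notifSettings.split(",").map((c) => c.trim()).filter(Boolean),
    },
  };
}
```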

Step 5: Database Decomposition

This is the hardest part. Shared databases are the #1 source of coupling in monoliths.

Strategy: Database-per-service with eventual consistency

BEFORE (Shared Database):
┌─────────┐
│Monolith │ → Single PostgreSQL
└─────────┘

AFTER (Database per Service):
┌─────────┐    ┌──────────┐    ┌──────────┐
│  User   │    │ Payment  │    │ Notif.   │
│ Service │    │ Service  │    │ Service  │
└────┬────┘    └────┬─────┘    └────┬─────┘
     │              │               │
┌────┴─────┐   ┌────┴─────┐    ┌────┴─────┐
│PostgreSQL│   │PostgreSQL│    │ MongoDB  │
│  Users   │   │ Payments │    │  Notifs  │
└──────────┘   └──────────┘    └──────────┘

Rules for database decomposition:

  1. No service reads another service's database directly
  2. Data duplication is acceptable — services can maintain their own read models
  3. Use events for data synchronization between services
  4. Accept eventual consistency (this is the hardest cultural shift)
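Rules 2 and 3 in practice: the Notification service keeps its own copy of just the user fields it needs, updated by subscribing to user events rather than querying the User service's database. A minimal in-memory sketch (the event shape is a simplified stand-in):

```typescript
// Local read model: the Notification service's private, duplicated copy
// of the user data it needs. It is updated only via events — never by
// reading another service's database.
interface UserContactChanged {
  userId: string;
  email: string;
}

const contactReadModel = new Map<string, { email: string }>();

// Idempotent upsert: replaying or re-delivering the same event is
// harmless, which is what makes eventual consistency workable here.
function onUserContactChanged(event: UserContactChanged): void {
  contactReadModel.set(event.userId, { email: event.email });
}
```

The read model may lag the User service by seconds; for notifications that is an acceptable trade, which is exactly the eventual-consistency judgment rule 4 asks you to make per domain.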

Fintech-Specific Considerations

Working in fintech adds additional constraints:

Transaction Integrity

Financial operations must complete fully or not at all, even when they span multiple services. We used the Saga pattern to coordinate distributed transactions:

// Saga: Transfer funds between accounts
const transferFundsSaga = {
  steps: [
    {
      action: "debit-source-account",
      compensation: "credit-source-account", // Rollback
    },
    {
      action: "credit-destination-account",
      compensation: "debit-destination-account", // Rollback
    },
    {
      action: "record-transaction",
      compensation: "void-transaction", // Rollback
    },
    {
      action: "send-confirmation",
      compensation: "send-failure-notification", // Rollback
    },
  ],
};
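A saga definition like the one above needs an executor: run the steps in order, and if one fails, run the compensations of the already-committed steps in reverse. A minimal sketch (the step handlers here are stand-ins, not the real account operations):

```typescript
interface SagaStep {
  name: string;
  action: () => Promise<void>;
  compensation: () => Promise<void>;
}

// Run steps in order; on failure, compensate completed steps in reverse.
async function runSaga(steps: SagaStep[]): Promise<boolean> {
  const completed: SagaStep[] = [];
  for (const step of steps) {
    try {
      await step.action();
      completed.push(step);
    } catch {
      // Undo in reverse: the last committed step is compensated first
      for (const done of completed.reverse()) {
        await done.compensation();
      }
      return false; // saga aborted and compensated
    }
  }
  return true; // every step committed
}
```

A production saga also persists its state after every step so compensation survives a crash mid-saga; workflow engines handle that bookkeeping, but the control flow is the same as above.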

Audit Trail

Every operation must be traceable. We implemented event sourcing for critical financial services:

// Every state change is recorded as an immutable event
interface AccountEvent {
  eventId: string;
  accountId: string;
  type: "CREDIT" | "DEBIT" | "HOLD" | "RELEASE";
  amount: number;
  currency: string;
  timestamp: Date;
  operatorId: string;
  reason: string;
}

// Current balance is derived from replaying events
function getBalance(events: AccountEvent[]): number {
  return events.reduce((balance, event) => {
    switch (event.type) {
      case "CREDIT": return balance + event.amount;
      case "DEBIT": return balance - event.amount;
      default: return balance;
    }
  }, 0);
}

Regulatory Compliance

Different services may fall under different regulatory requirements. Microservices actually help here — you can apply strict compliance controls only to the services that need them.

Migration Timeline

A realistic timeline for a medium-sized fintech monolith:

Phase                    Duration          Activities
Discovery                2-4 weeks         Domain mapping, dependency analysis, team alignment
Infrastructure           4-6 weeks         Service mesh, CI/CD, monitoring, event bus setup
First Service            6-8 weeks         Extract, test, deploy, stabilize
Subsequent Services      4-6 weeks each    Gets faster as patterns are established
Decommission Monolith    4-8 weeks         Final data migration, cutover, cleanup

Total: 6-12 months for a typical migration. Anyone promising faster is either working with a tiny codebase or cutting corners.

Key Takeaways

  1. Don't migrate unless you have clear technical or organizational reasons
  2. Start with the strangler fig pattern — never do a big bang rewrite
  3. Extract the least coupled service first to build confidence
  4. Invest in observability before you need it — distributed systems are hard to debug
  5. Accept eventual consistency — it's the price of scalability
  6. Database decomposition is the hardest part — plan for it from day one
  7. Culture matters more than technology — microservices require teams that can own and operate services independently

The goal isn't microservices. The goal is the ability to deliver value independently and scale what needs scaling. Sometimes that means microservices. Sometimes it means a well-structured monolith.


Planning a migration from monolith to microservices? Let's talk architecture — I've helped multiple organizations navigate this transition successfully.

#microservices #architecture #fintech #distributed-systems #migration

Obinna Agim

Technology leader with 11+ years building scalable systems. Fractional CTO and system architect helping companies scale their engineering organizations.
