Moving a monolith to a modern, flexible cloud environment

Modernising a Mission-Critical Legacy Application and Platform for the Cloud


Introduction

In a recent project for one of the world’s oldest and largest shipping companies, NYK RORO Europe, we transformed an aging, ‘all-in-one’ Java software system into a modern, flexible cloud environment. Previously, one of the company’s business applications operated as a single, massive unit known as a monolith. This meant that if one small part of the system became overloaded (such as a complex calculation task), it would ‘hog’ all the computer’s resources, potentially slowing down or crashing the entire platform for every user. It was an inefficient way to work, as the company had to pay to keep the entire massive system running at full power 24/7 just to handle small spikes in demand.

By moving to a cloud-native ecosystem using Azure Kubernetes Service (AKS), we broke that monolith into smaller, independent pieces called microservices. This new setup allows the system to be highly efficient and cost-effective, as it can automatically scale up only the specific parts under pressure while keeping others small, ensuring the client only pays for the computing power they actually use. Furthermore, this modern architecture allows for continuous updates without downtime, meaning new features can be added without ever having to switch the system off. We achieved this by carefully wrapping the new technology around the original, proven business rules, delivering a faster, more reliable platform without the high risk of a total rewrite.

Let’s dig into the detail…

Don’t have time right now? Skip to the TL;DR below.



The Challenge: Breaking Free from the ‘Monolithic Bottleneck’

The original application ran on a traditional Java Web Application Server. While it served the business for years, it suffered from ‘fate-sharing’: the entire application shared a single Java Virtual Machine (JVM) and pool of hardware resources.

The Problem with Indiscriminate Scaling

In the legacy environment, if a single complex backend service, such as a rule engine processing component, experienced a spike in demand, it would ‘hog’ the CPU and exhaust the JVM heap memory. Because the application was a monolith, it wasn’t possible to scale just the bottlenecked feature; the entire application server had to be replicated across multiple virtual machines. This led to:

  • Resource Inefficiency: The client was paying for 100% of the codebase to be replicated when only 5% was under load.
  • The ‘Noisy Neighbour’ Effect: One runaway process could degrade the performance of every other feature in the system.

The Solution: Resource Isolation in Azure Kubernetes Service (AKS)

To overcome these problems, we decomposed the monolith into discrete Spring Boot microservices, each running in its own Docker container orchestrated by AKS.
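As an illustration of what this containerised setup looks like in practice, here is a minimal sketch of a Kubernetes Deployment for one such service. The service name, image path, and resource figures are hypothetical, not taken from the client’s actual configuration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rule-engine          # hypothetical microservice name
spec:
  replicas: 2
  strategy:
    type: RollingUpdate      # update one Pod at a time with zero downtime
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: rule-engine
  template:
    metadata:
      labels:
        app: rule-engine
    spec:
      containers:
        - name: rule-engine
          image: example.azurecr.io/rule-engine:1.0.0   # hypothetical registry/image
          resources:
            requests:            # the scheduler reserves this much for the Pod
              cpu: "500m"
              memory: "512Mi"
            limits:              # the container can never exceed these caps
              cpu: "1"
              memory: "1Gi"
```

The `requests`/`limits` pairing is what delivers the resource isolation described above: a runaway process in one container cannot starve its neighbours, because the container is capped at its own limits rather than sharing a single JVM heap.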

Technical Precision & Cost Savings

  • Granular Sizing: We are now able to ‘right-size’ each service: lightweight services are assigned minimal CPU/memory, while high-compute services are allocated exactly the resources they require.
  • Horizontal Pod Autoscaling (HPA): AKS monitors traffic in real time; during peak events, it scales only the specific microservices that are under pressure.
  • The Financial Impact: By eliminating the need to provision for ‘peak load’ across the entire monolith 24/7, we were able to deliver significant cloud cost optimisation. The client now only pays for the compute power they are actively using.
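To make the HPA bullet concrete, here is a minimal sketch of a HorizontalPodAutoscaler manifest (the target Deployment name and the replica/utilisation figures are illustrative assumptions, not the client’s actual values). It tells AKS to add Pods to one specific service whenever its average CPU usage crosses a threshold, and to remove them again when demand subsides:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: rule-engine-hpa
spec:
  scaleTargetRef:              # only this one Deployment is scaled,
    apiVersion: apps/v1        # not the whole platform
    kind: Deployment
    name: rule-engine          # hypothetical service name
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Because scaling is scoped to `scaleTargetRef`, a spike in this one service never forces the rest of the platform to grow with it, which is the root of the cost saving.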

Benefits of Modern AKS Microservices over Legacy Java Monolith

| Feature | Legacy Java Monolith | Modern AKS Microservices |
| --- | --- | --- |
| Scaling Granularity | Vertical: Must scale the entire application, even for a single feature spike. | Horizontal: Scales specific services (Pods) independently based on real-time load. |
| Resource Isolation | Poor: One heavy process (e.g., reporting) can crash the entire JVM/Server. | Strong: Containers have strict CPU/Memory limits; failures are isolated. |
| Cloud Cost | High: Requires 24/7 'peak load' provisioning for the whole app. | Optimised: Uses autoscaling to pay only for active compute; scales to zero if idle. |
| Deployment | 'Big Bang': Long maintenance windows; entire system must be restarted. | Continuous: Update one service at a time with zero downtime (Rolling Updates). |
| Tech Flexibility | Locked-in: Entire app must use the same Java version and libraries. | Flexible: Each service can use the best version or tool for its specific job. |

Data Evolution: Transitioning from WSDL to GraphQL

To modernise how the front end interacts with these services, we replaced the rigid, XML-based WSDL (SOAP) APIs with a GraphQL API layer.

Why GraphQL?

  • Efficient Data Retrieval: Traditional WSDL APIs often returned massive ‘blobs’ of data, much of which was irrelevant and ignored. With GraphQL, the React front end can request exactly the fields it needs, reducing network latency and payload size.
  • Unified Interface: GraphQL acts as a single gateway to multiple microservices, simplifying the complexity for the front-end developers while maintaining a strict, type-safe schema.
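The ‘exactly the fields it needs’ point is easiest to see in a query. The sketch below uses hypothetical field names for a shipping-style domain (the real schema is the client’s own); where a SOAP endpoint would return a full XML record, the React client asks for three fields and receives only those three in the JSON response:

```graphql
query {
  booking(id: "B-1042") {   # hypothetical type and id
    vesselName              # only these three fields are
    departurePort           # fetched, parsed, and sent over
    eta                     # the wire - nothing else
  }
}
```

Because the query names every field explicitly, the payload shrinks to exactly what the UI renders, and related resources can be nested into the same request instead of triggering extra round trips.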

Functionality Comparison of Traditional WSDL and GraphQL

| Feature | Traditional WSDL (SOAP) | Modern GraphQL API |
| --- | --- | --- |
| Data Fetching | Fixed: Returns a pre-defined XML blob. You often get more data than needed. | Declarative: The React front end asks for exactly the fields it needs. |
| Round Trips | Multiple: Fetching related data often requires several distinct API calls. | Single: Can batch requests for different resources into one network round trip. |
| Schema & Typing | Strict (WSDL): Reliable but extremely difficult to change or version. | Strongly Typed (SDL): Introspective schema that provides built-in documentation. |
| Payload Format | XML: Verbose and heavy, increasing bandwidth and parsing time. | JSON: Lightweight and native to the React/JavaScript ecosystem. |
| Versioning | Difficult: Usually requires creating new endpoints (v1, v2) for any change. | Evolutionary: Add new fields without breaking existing clients; deprecate old ones. |

User Experience: High-Velocity Development with React

The custom, legacy JavaScript front end was replaced with a modular React application.

  • Component Reuse: React allows us to build a library of UI components, ensuring a consistent look and feel while drastically speeding up the development of new features.
  • Responsive Performance: By offloading state management to the browser, the interface feels instantaneous, providing a modern ‘App-like’ experience for users.

Risk Mitigation: The ‘Strangler Fig’ Approach

The most critical aspect of this project was protecting the complex backend services built over many years.

  • The Strategy: Instead of a total rewrite, we ‘wrapped’ the existing, proven backend services in new API layers. Using the ‘Strangler Fig’ pattern, we routed traffic to the new services incrementally, ensuring that the mission-critical business logic remained stable while the underlying infrastructure evolved into a cloud-native AKS environment.
  • The Outcome: The client gained all the benefits of a modern cloud-native stack without the massive risk (and cost) of rewriting decades of validated business rules.
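One common way to implement this incremental routing in Kubernetes is a path-based Ingress, sketched below with hypothetical service names and paths (the client’s actual routing rules are not shown here). Newly extracted endpoints are peeled off one path at a time, while everything else continues to flow to the original application:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strangler-routing
spec:
  rules:
    - http:
        paths:
          - path: /api/bookings        # traffic already 'strangled' off
            pathType: Prefix           # to a new microservice
            backend:
              service:
                name: booking-service  # hypothetical new service
                port:
                  number: 80
          - path: /                    # everything else still served
            pathType: Prefix           # by the wrapped legacy app
            backend:
              service:
                name: legacy-monolith  # hypothetical legacy service
                port:
                  number: 80
```

Each time a new service proves itself in production, another path moves from the catch-all rule to a dedicated backend, until the monolith is fully ‘strangled’ with no big-bang cutover.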

Conclusion: Ready for the Future

The result is a platform that is faster, more reliable, and ready to evolve alongside the business. By modernising the ‘shell’ while preserving the ‘core,’ we delivered a high-impact upgrade with minimal operational risk and lower operational cost.



TL;DR

Key Developments

  • Decomposition into Microservices: The original ‘all-in-one’ application was broken down into small, independent microservices using Java Spring Boot.
  • Adoption of Azure Kubernetes Service (AKS): These independent services are now managed by AKS, which uses Docker containers to ensure each part of the system has its own dedicated space.
  • Modernised Data Layer: The team replaced old, bulky data-request methods (WSDL/SOAP) with GraphQL, which allows the system to fetch only the specific information needed.
  • New User Interface: The old front end was replaced with React, a modern toolkit that uses reusable components to speed up the development of new features.
  • The ‘Strangler Fig’ Strategy: Rather than a risky total rewrite, the team ‘wrapped’ the original, proven business rules in new technology, allowing for a gradual and safe transition.

Key Benefits

  • Significant Cost Savings: By using autoscaling, the client no longer pays for 100% of the system to run at full power 24/7; they only pay for the computing power they are actively using.
  • Continuous Updates with Zero Downtime: The new architecture allows for ‘Rolling Updates,’ meaning individual features can be updated without ever having to switch the entire system off.
  • Enhanced Reliability: Because services are now isolated, a single heavy process (such as a complex report) can no longer ‘hog’ resources and crash the entire platform.
  • Improved Speed and Efficiency: GraphQL reduces ‘data waste’ by delivering lightweight packages of information, resulting in faster performance and lower bandwidth usage.
  • Future-Proof Flexibility: The system is no longer ‘locked-in’ to one version of Java; each part of the software can now use the best and newest tools for its specific job.

Do you feel stuck with legacy monolith systems? Are you ready to transition to a modern, flexible cloud environment?

+44 (0)114 399 2820

info@deeperthanblue.com

Get in touch
