Tony Moore | 09 April 2025

Migrating from WebSphere Application Server to Liberty in Containers


A Comprehensive Guide to Modernising Your J2EE Applications

Introduction

In today’s rapidly evolving technology landscape, organisations face increasing pressure to modernise their applications to remain competitive, agile, and cost-effective. For enterprises running monolithic Java EE (J2EE) applications on traditional IBM WebSphere Application Server (WAS), this modernisation journey often leads to containerisation—a transformative approach that promises greater scalability, flexibility, and operational efficiency.

This comprehensive guide explores the why, what, and how of migrating from traditional WebSphere Application Server (WAS Network Deployment, WAS Base and WAS Express) to WebSphere Liberty in a containerised environment. Whether you’re a seasoned IT professional or new to containerisation, this guide will equip you with the knowledge and insights needed to successfully transition your applications to a modern, cloud-native architecture.

Why Move a Monolithic J2EE Application to a Containerised Platform?

Moving a traditional application from IBM WebSphere Application Server running on virtual machines (VMs) to a containerised environment in Kubernetes offers numerous compelling benefits across several dimensions:

Improved Resource Utilisation and Cost Efficiency

  • Smaller Footprint with Containers: Containers are significantly lighter than VMs because they share the host operating system kernel while including only the application and its dependencies. This dramatically reduces the overhead of running full VMs for each instance of your application and improves resource utilisation.
  • Optimised for WebSphere Liberty: Transitioning to WebSphere Liberty, which has a modular architecture, further reduces resource usage by only loading necessary features, lowering memory and CPU requirements.
  • Cost Savings in the Cloud: In cloud environments where you pay for resource usage (CPU, memory, etc.), containers allow you to run more application instances on the same infrastructure, reducing costs compared to provisioning separate VMs for each WAS instance.

 

Enhanced Scalability and Elasticity

  • Dynamic Scaling: Kubernetes enables rapid, automated scaling of your application based on demand. You can configure Horizontal Pod Autoscaling (HPA) to add or remove container instances (pods) in response to metrics like CPU usage or request volume—something that’s far more cumbersome with VMs.
  • Microservices Enablement: If you refactor your application during migration (breaking the monolith into smaller services), Kubernetes is ideal for managing microservices. Each service can run in its own container, enabling resources to be allocated and scaled per application function, with Kubernetes handling service discovery, load balancing, and scaling. This improves resource efficiency and performance under varying loads, overcoming the rigid, single-instance limitations of traditional WAS deployments.
  • Faster Startup Times: WebSphere Liberty’s fast startup time (often under 5 seconds) allows Kubernetes to scale applications more quickly than traditional WAS, which can take minutes to start. This is critical for handling sudden traffic spikes.
  • Cluster-Wide Resource Sharing: Kubernetes distributes workloads across a cluster of nodes, ensuring efficient use of resources and avoiding the over-provisioning often required with VMs to handle peak loads.
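As a sketch of the dynamic scaling described above, a Horizontal Pod Autoscaler for a Liberty deployment might look like the following. The deployment name, replica bounds, and CPU threshold are illustrative assumptions, not recommendations:

```yaml
# Hypothetical HPA for a Liberty deployment named "my-liberty-app";
# names and thresholds here are assumptions to adapt to your workload.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-liberty-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-liberty-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Because Liberty starts in seconds, newly scheduled pods become ready quickly enough for this kind of reactive scaling to absorb traffic spikes.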

 

Simplified Deployment and Management

  • Declarative Configuration: Kubernetes uses declarative manifests (YAML files) to define the desired state of your application, including replicas, resource limits, and networking. This approach simplifies management and version control compared to manually configuring WAS clusters across VMs. (Depending on who you ask, YAML stands for either Yet Another Markup Language or YAML Ain’t Markup Language, proving that techies do indeed have a sense of humour!)
  • Centralised Management: Kubernetes provides a unified platform (via tools like kubectl, Helm, or dashboards) to manage your application, networking, storage, and monitoring, reducing the complexity of managing multiple VMs with separate WAS instances.
  • Rolling Updates and Zero Downtime: Kubernetes supports rolling updates, allowing you to deploy new versions of your application without downtime—more seamless than updating a traditional WAS cluster on VMs, where you might need to take nodes offline manually.

 

Increased Agility and Faster Development Cycles

  • DevOps and CI/CD Integration: Kubernetes integrates well with CI/CD pipelines (Jenkins, GitLab CI, etc.), enabling faster and more frequent deployments. Containers allow you to package your application and its dependencies consistently, reducing “works on my machine” issues.
  • Portability Across Environments: Containers encapsulate your application and its runtime (Liberty), making it portable across development, testing, and production environments—or even across different cloud providers.

 

Improved Resilience and High Availability

  • Self-Healing Capabilities: Kubernetes automatically restarts failed containers, reschedules pods to healthy nodes, and ensures the desired number of replicas are running. This reduces downtime compared to a VM-based setup.
  • Load Balancing and Traffic Management: Kubernetes provides built-in load balancing across pods, ensuring even distribution of traffic—more dynamic than traditional WAS clustering, which relies on static configurations.
  • Multi-Node Redundancy: In a Kubernetes cluster, your application can run across multiple nodes (physical or virtual), reducing the risk of a single point of failure.

 

Better Alignment with Cloud-Native Practices

  • Stateless Design: Kubernetes encourages stateless application design, which aligns well with WebSphere Liberty’s lightweight, stateless runtime.
  • Support for Modern Standards: Liberty supports modern standards like Jakarta EE or MicroProfile, which are better suited for cloud-native environments.
  • Integration with Cloud Services: Kubernetes makes it easier to integrate your application with cloud-native services (managed databases, monitoring tools, serverless functions, etc.), enhancing functionality and reducing operational overhead.

 

Simplified Monitoring and Logging

  • Centralised Observability: Kubernetes integrates with modern monitoring and logging tools (Prometheus, Grafana, Fluentd, Elasticsearch, etc.), providing centralised visibility into your application’s health, performance, and logs.
  • Health Checks: Kubernetes supports liveness and readiness probes to monitor the health of your application containers.
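As a sketch, liveness and readiness probes for a Liberty container might look like this. The paths assume the application enables MicroProfile Health (the mpHealth feature), which serves /health/live and /health/ready on Liberty's default HTTP port 9080; the timings are illustrative assumptions:

```yaml
# Excerpt from a Deployment's container spec (illustrative values).
# Assumes MicroProfile Health (mpHealth) is enabled in server.xml.
livenessProbe:
  httpGet:
    path: /health/live
    port: 9080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/ready
    port: 9080
  initialDelaySeconds: 10
  periodSeconds: 5
```

Kubernetes restarts a container whose liveness probe fails and withholds traffic from one whose readiness probe fails, which is how the self-healing behaviour described earlier is actually wired up.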

 

Reduced Operational Overhead

  • Infrastructure as Code (IaC): Kubernetes allows you to manage your infrastructure as code, using tools like Helm charts or Kubernetes manifests—reducing manual configuration tasks.
  • No Need for Node Agents: With Liberty, you eliminate the need for WAS ND’s Node Agents and Deployment Manager, simplifying the architecture.
  • Easier Patching and Upgrades: Containers allow you to update your application or runtime by building a new image and redeploying it, with Kubernetes handling the rollout.

 

Future-Proofing and Vendor Flexibility

  • Avoid Vendor Lock-In: Containers and Kubernetes are portable across cloud providers (AWS, Azure, Google Cloud) and on-premises environments.
  • Adoption of Modern Tools: Kubernetes is the de facto standard for container orchestration, positioning your application to take advantage of a rich ecosystem of tools and best practices.
  • Path to Microservices: If you plan to modernise your application further in the future, Kubernetes provides the ideal foundation for microservices.

 

Read more about our containerisation services at DeeperThanBlue.

What is WebSphere Liberty and Why is it Recommended for Containerisation?

WebSphere Liberty is IBM’s modern, lightweight, modular application server designed for cloud-native deployments. It is ideal for containerisation because its design and features align closely with the principles and requirements of containerised environments such as Docker and Kubernetes.

Liberty is purpose-built to meet these needs, making it a better fit than traditional runtimes like WebSphere Application Server ND, Base and Express.

WebSphere Liberty can be purchased as a standalone product and deployed in containers on-premises or with any cloud provider. Alternatively, it is also provided as part of the IBM Cloud Pak for Applications bundle, alongside traditional WAS and Red Hat OpenShift.

What Makes Liberty Ideal for Containerised Environments?

  • Faster Startup and Scaling: Liberty’s quick startup time (seconds vs. minutes for traditional WAS) aligns better with Kubernetes’ dynamic scaling needs.
  • Smaller Images: Liberty-based container images are significantly smaller than traditional WAS images, reducing storage and network overhead.
  • Simplified Clustering: Liberty doesn’t rely on traditional WAS clustering (Node Agents, Deployment Manager, etc.) used in WAS ND, making it easier to manage in Kubernetes, where clustering is handled at the platform level.

 

Comparison of WebSphere Liberty vs. Traditional WebSphere Application Server

Aspect | WebSphere Liberty | Traditional WebSphere Application Server
Architecture | Lightweight, modular, and kernel-based; loads only required features | Monolithic; loads a full set of features regardless of need
Resource Footprint | Small memory and CPU usage (e.g., ~100 MB for a basic app) | Larger footprint (e.g., 1-2 GB or more per instance)
Startup Time | Very fast (under 5 seconds) | Slower (often minutes)
Configuration | Single server.xml file, human-readable, dynamically updatable | Complex, with multiple XML files and an admin console
Cloud-Native Support | Designed for containers, Kubernetes, and microservices; stateless by default | Can run in containers but not optimised; designed for traditional clustering
Scalability | Scales dynamically in Kubernetes; stateless design suits horizontal scaling | Scales via traditional clustering in WAS ND (e.g., Dynamic Clusters); less agile in cloud
Deployment Speed | Fast deployment due to lightweight runtime and quick startup | Slower deployment due to heavier runtime and startup delays
Programming Models | Supports Jakarta EE, MicroProfile, Spring Boot, and traditional Java EE | Supports Java EE and traditional WebSphere APIs; less focus on modern standards
Clustering | No traditional clustering; relies on Kubernetes or external tools for scaling | Advanced clustering with Deployment Manager, Node Agents, and workload management under WAS ND
Use Case | Best for cloud-native apps, microservices, and containerised deployments | Best for legacy, stateful, or complex enterprise apps requiring traditional clustering
Cost | Lower operational cost due to reduced resource usage; licensing varies (Open Liberty is free) | Higher resource usage increases costs; licensing typically per Virtual Processor Core (VPC)
Migration | Easier to migrate to for modern apps; some legacy features may need refactoring | Harder to modernise; often used for existing, complex deployments

Learn more about our WebSphere expertise at DeeperThanBlue

What Do I Lose by Moving from Traditional WebSphere Application Server to WebSphere Liberty and How Do I Mitigate These?

When migrating from traditional WebSphere Application Server to WebSphere Liberty in a containerised environment, you’ll need to consider certain trade-offs. Some traditional WebSphere features—particularly those related to clustering, distributed transactions in WAS ND, and legacy Java EE capabilities—won’t be available in the same form in Liberty.

However, it’s important to note that many of these features become less relevant in a cloud-native, containerised setup where Kubernetes provides alternative mechanisms for scaling, load balancing, and high availability. By embracing Liberty’s stateless, lightweight design and leveraging Kubernetes’ capabilities, you can effectively mitigate these losses while gaining the benefits of a modern, container-friendly runtime.

Below is a list of the features that will be lost and the appropriate mitigation strategies.

Feature | Impact in Liberty | Mitigation Strategy
Traditional Clustering/WLM | Not supported; no Deployment Manager or WLM | Use Kubernetes for scaling and load balancing (e.g., HPA, Services)
Global Transaction Propagation | Not supported across JVMs | Refactor to use local transactions or compensating transactions
EJB 2.x Entity Beans (CMP) | Not supported | Migrate to JPA or another persistence framework
Service Integration Bus (SIB) | Not supported | Use an external messaging system (e.g., IBM MQ, Kafka)
WebSphere Batch | Not fully supported | Use Jakarta Batch or an external batch solution (e.g., Spring Batch)
Dynamic Cache Service | Not supported | Use an external cache such as Redis
Session Replication | Not supported across JVMs | Externalise sessions to Redis or a database; use sticky sessions in Kubernetes
Administrative Console/wsadmin | Limited Admin Center; no wsadmin scripting | Manage via server.xml, REST APIs, or Kubernetes tools (e.g., kubectl)
Security Domain Configuration | Not supported | Configure security directly in server.xml with modern standards (e.g., OAuth)
Cross-Cell Topologies | Not supported | Use a Kubernetes service mesh or external load balancers for multi-deployment setups
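To illustrate the sticky-sessions mitigation, a Kubernetes Service can pin clients to a pod using client-IP session affinity. This is a hypothetical sketch (service name, timeout, and ports are assumptions) and a stopgap rather than a substitute for externalising sessions to Redis or a database:

```yaml
# Hypothetical Service using client-IP session affinity as a simple
# "sticky sessions" stand-in while sessions remain in-memory.
apiVersion: v1
kind: Service
metadata:
  name: my-liberty-app
spec:
  selector:
    app: my-liberty-app
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600
  ports:
    - port: 80
      targetPort: 9080
```

Note that client-IP affinity breaks down behind some proxies and NATs, which is another reason externalised session storage is the preferred long-term mitigation.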

 

How do I go about migrating from WebSphere Application Server to Liberty in Containers?

Can I Just Use the Traditional WebSphere Application Server Images in Containers?

It is technically possible to run all editions of traditional IBM WebSphere Application Server 9.0.5, including WAS ND, in Kubernetes using container images provided by IBM. However, it is not the most practical or efficient approach for a cloud-native environment due to the larger resource footprint, longer startup time, clustering complexity, configuration overhead, and lack of optimisation for cloud-native patterns.

This approach would present several challenges:

  • Resource Inefficiency: Traditional WAS’s larger memory and CPU footprint compared to Liberty makes it less efficient for containerised environments where resource optimisation is paramount.
  • Slow Startup Performance: Traditional WAS’s lengthy startup time (often minutes) can significantly impede Kubernetes’ scaling operations, such as pod restarts or auto-scaling.
  • Clustering Complexity: WAS ND’s clustering model (Dynamic Clusters, Node Agents, etc.) was designed for traditional VM-based environments and doesn’t align well with Kubernetes’ architecture.
  • Configuration Overhead: WAS ND’s complex configuration approach is more cumbersome to manage in containerised environments compared to Liberty’s single server.xml file.
  • Limited Cloud-Native Fit: Traditional WAS wasn’t optimised for cloud-native patterns like microservices, rapid scaling, or stateless deployments that are common in Kubernetes environments.

Discover how we help clients with containerisation at DeeperThanBlue

How to Undertake the Migration and How IBM Tools Can Help

Migrating from traditional WebSphere Application Server to WebSphere Liberty in a containerised environment requires careful planning and execution. Fortunately, IBM provides several tools to assist with this process and DeeperThanBlue has the experience to lead you through the process. Here’s a high-level overview of the migration journey and how these tools can support you at each stage:

1. Assess Your Application

The first step in any migration is to thoroughly understand your current application and its dependencies. This assessment will help you identify potential challenges and develop a migration strategy.

  • Use IBM Transformation Advisor: This free tool analyses your WAS ND 9.0.5 applications to identify migration complexity, flag deprecated features, and provide recommendations for running on Liberty.
  • Identify WAS ND-Specific Features: Check for WAS ND-specific features your application uses (EJB clustering, global transactions, etc.) that may not be supported in Liberty or may require refactoring.

 

2. Update Your Application

Based on your assessment, you’ll need to update your application code and configuration to make it compatible with Liberty and suitable for containerisation.

  • Update Deprecated APIs: Move from Java EE to Jakarta EE if needed and address any deprecated APIs.
  • Refactor traditional WAS-Specific Features: Modify or replace any WAS ND-specific features that aren’t supported in Liberty.
  • Adjust Configurations: Migrate from traditional WAS’s XML-based configuration to Liberty’s server.xml.
  • Leverage watsonx Code Assistant: This AI-powered tool can significantly accelerate your code modernisation efforts by identifying issues and suggesting fixes.
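The configuration step above typically ends with a single server.xml. A minimal, illustrative example follows; the feature list, ports, and application name are assumptions, and you should enable only the features your application actually uses:

```xml
<!-- Minimal, illustrative server.xml; feature names, ports, and the
     application location are assumptions for this sketch. -->
<server description="Migrated application server">
  <featureManager>
    <feature>servlet-6.0</feature>
    <feature>jdbc-4.3</feature>
    <feature>mpHealth-4.0</feature>
  </featureManager>
  <httpEndpoint id="defaultHttpEndpoint" host="*"
                httpPort="9080" httpsPort="9443"/>
  <webApplication location="myapp.war" contextRoot="/myapp"/>
</server>
```

Because Liberty loads only the features listed here, this one file replaces the sprawling XML configuration and admin console settings of traditional WAS.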

 

3. Containerise the Application

With your application updated, the next step is to package it as a container image that can run in Kubernetes.

  • Build a Docker Image: Create a Docker image for your application using an official Liberty base image (e.g., icr.io/appcafe/websphere-liberty).
  • Generate Deployment Assets: Use IBM Transformation Advisor to generate Dockerfiles, Kubernetes manifests, and Helm charts.
  • Optimise for Kubernetes: Configure health checks, set resource limits/requests, and make other optimisations for Kubernetes. watsonx Code Assistant can help with generating and refining these assets.
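A typical Dockerfile for this step is sketched below. The image tag, file paths, and application name are assumptions; the IBM-provided Liberty images include a configure.sh script that processes the copied configuration at build time:

```dockerfile
# Illustrative Dockerfile; the tag, paths, and app name are assumptions.
FROM icr.io/appcafe/websphere-liberty:full-java17-openj9-ubi

# Copy the server configuration and the application archive
COPY --chown=1001:0 server.xml /config/
COPY --chown=1001:0 target/myapp.war /config/apps/

# Process the configuration so the image contains what the server needs
RUN configure.sh
```

Building from the official base image keeps your image layers small and makes runtime patching a matter of rebuilding against an updated base.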

 

4. Deploy to Kubernetes

With your containerised application ready, you can now deploy it to your Kubernetes environment.

  • Use Helm or Kubernetes Manifests: Deploy your Liberty-based application using a Helm chart or Kubernetes manifests.
  • Configure Kubernetes Features: Set up auto-scaling, liveness/readiness probes, and service discovery to take advantage of Liberty’s cloud-native capabilities.
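Tying the deployment step together, a minimal Deployment manifest for a Liberty-based application might look like this. The names, image reference, and resource sizes are illustrative assumptions to adapt to your registry and workload:

```yaml
# Hypothetical Deployment for a Liberty-based application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-liberty-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-liberty-app
  template:
    metadata:
      labels:
        app: my-liberty-app
    spec:
      containers:
        - name: app
          image: registry.example.com/my-liberty-app:1.0.0
          ports:
            - containerPort: 9080
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 1Gi
```

The same manifest (or its Helm-templated equivalent) is what Kubernetes reconciles against during rolling updates and self-healing, so it belongs in version control alongside your application code.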

 

5. Test and Validate

Finally, thorough testing is essential to ensure your migrated application functions correctly in its new environment.

  • Functional Testing: Test your application in Kubernetes to ensure it behaves as expected.
  • Automation: Build a suite of automated tests that can run as a regression pack whenever future changes are made.
  • Performance and Load Testing: Create and execute performance and load tests to confirm that performance is at least equal to, and ideally better than, that of the existing application.
  • Performance Monitoring: Monitor resource usage, scaling behaviour, and performance to confirm that Liberty is meeting your expectations.

 

Contact our experts at DeeperThanBlue for assistance with your WebSphere migration

Conclusion

Migrating from traditional WebSphere Application Server to WebSphere Liberty in a containerised environment represents a significant step in modernising your Java applications. While the journey involves challenges and trade-offs, the benefits—including improved resource efficiency, faster startup times, simplified management, and better alignment with cloud-native practices—make it a worthwhile investment for organisations looking to stay competitive in today’s rapidly evolving technology landscape.

By following a structured approach and leveraging IBM’s migration tools, you can navigate this transition successfully, positioning your applications for greater agility, scalability, and cost-effectiveness. Whether you’re just starting to explore containerisation or are already planning your migration, the path to modernisation is clear, and the rewards substantial.

WebSphere Liberty can be purchased as a standalone product and deployed in containers on-premises or with any cloud provider, such as Microsoft Azure, Google Cloud or Amazon AWS. Alternatively, it is also provided as part of the IBM Cloud Pak for Applications bundle, alongside traditional WAS and Red Hat OpenShift. IBM Cloud Pak can be deployed on public or private cloud. If you need help with IBM software licensing, we have specialists who can assist you.

Ready to begin your WebSphere Liberty migration journey? Contact our team at DeeperThanBlue for expert guidance and support throughout your modernisation initiative.

 
