Building Spring Cloud Microservices That Strangle Legacy Systems

Tuesday, August 30, 2016

It’s safe to say that any company that was writing software ten years ago—and is building microservices today—will need to integrate with legacy systems. In this article, we will explore techniques for building cloud-native microservices that do exactly that. We’ll use practices from Martin Fowler’s Strangler Application pattern to slowly strangle domain data away from a legacy system using microservices.

When building microservices, the general approach is to decompose existing monoliths into new microservices. The most critical concerns in this method have much less to do with the application code and much more to do with handling data. This article will focus on methods for strangling a monolith’s ownership of domain data by transitioning the system of record over time.

Throughout this article, we’ll use a reference application built with Spring Boot and Spring Cloud. The example demonstrates techniques for integrating a cloud-native microservice architecture with legacy applications in an existing SOA.

Going Cloud Native

Many companies want to start taking advantage of the public cloud without having to migrate every line of business application at the same time. The reasons for this are numerous. The existing line of business applications can be thought of as the vital organs of a living organism. During the migration, think about the complex relationships between the existing components deployed to your infrastructure. Think about the dependencies of your applications and the connections between them. Think about every application that relies on a database or network file system. These are among the many considerations that will cause a migration to become an expensive and time-consuming project.

Hybrid cloud data center integration

The unfathomable complexity of migrating applications to the cloud can delay a decision until the business deems it necessary, which is usually triggered by a major event that results in a loss of revenue. This ticking time bomb tends to end in a lift-and-shift migration of applications. The problem with the lift-and-shift approach is that any technical debt you had on-premises finds new life in the cloud environment, while the architectural and infrastructure issues that triggered the cloud migration may still go unfixed.

The approach I discuss in this article focuses on addressing the underlying problems in legacy systems that decrease system resiliency and lead to costly failures.

Legacy Systems

The chief benefit of building microservices is that the time it takes to deliver valuable features into production is significantly reduced. By creating microservices that are cloud-native, you can make use of on-demand virtual compute resources of the cloud to operate and scale your applications. Microservices provide agility while cloud-native architectures provide our distributed applications with additional performance, scalability, and resiliency characteristics.

To gain the benefits of agility with microservices, you do not need to move all of your existing applications to the cloud in a big bang migration. There are hybrid approaches that can enable you to begin transitioning your business logic to the cloud by creating cloud-native microservices that strangle legacy systems still running on-premises.

We can start by building microservices that are deployed to the public cloud and integrate with legacy systems that sit on-premises.

Data Ownership

The most common pain point that companies experience when building microservices is handling domain data. Your domain data is likely going to be trapped inside a large shared database—probably being of the Oracle or IBM variety. Because of this, your new microservices will be dependent on retrieving data from a large shared database.

Refactoring your monoliths to microservices will take time. The data migration is going to be an immediate challenge, as parts of the monolith will involve access to data inside a large shared database. There are different approaches to managing this, depending on how much risk the system can tolerate.

Microservices can reach into the legacy system and fetch data in the same way that front-end applications do. While this isn’t a good long-term strategy, it can be an intermediate step in gaining control over the legacy backend.

Extending Domain Data

One method for handling legacy data source integration for microservices is to extend domain data. In this approach, we extend base domain objects retrieved from a legacy system with new fields owned by a microservice. The primary goal of building microservices is to gain speed and agility: the ability to make changes quickly and deploy them to production continuously. We can still gain this benefit while the base domain data is retrieved from a legacy system.

Extending domain data

Suppose you’re developing software for a bank. You’ve been tasked with building a microservice that will wrap around an existing customer domain object. If you want to extend that customer object to include new fields for a feature, you can just persist the new fields of the customer object to the microservice’s database. Now when a customer domain object is requested through your microservice, a call to the legacy system will retrieve the base customer object, and any new extended fields are retrieved from your microservice’s database. The new fields are combined with the base customer object before being returned as a single domain object to consumers.
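
To make this idea concrete, here is a minimal sketch of the combination step. CustomerProfileService, CustomerProfile, ProfileExtension, ExtensionRepository, and the getCustomer method are hypothetical names for illustration; they are not part of the reference application.

@Service
public class CustomerProfileService {

    private final CustomerClient customerClient;            // SOAP client for the legacy system (hypothetical)
    private final ExtensionRepository extensionRepository;  // the microservice's own database (hypothetical)

    public CustomerProfileService(CustomerClient customerClient,
                                  ExtensionRepository extensionRepository) {
        this.customerClient = customerClient;
        this.extensionRepository = extensionRepository;
    }

    public CustomerProfile getCustomerProfile(String username) {
        // Retrieve the base customer object from the legacy system
        Customer base = customerClient.getCustomer(username);

        // Retrieve the new extended fields from the microservice's database
        ProfileExtension extension = extensionRepository.findByUsername(username);

        // Combine both into a single domain object before returning it to consumers
        return new CustomerProfile(base, extension);
    }
}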

There are a few pros and cons to this approach.

Pros:

  • The legacy system does not need to be altered to support the development of new microservices

  • New features can be deployed independently without being tightly coupled to the legacy system

  • Existing calls to legacy web services remain unaltered for other applications

Cons:

  • Scalability may be a concern if the base legacy service is not cloud-native

  • Availability will be impacted if the base legacy service’s shared database suffers an outage

  • The dependency on the legacy system’s shared database is increased, making it harder to decompose

Legacy SOA Integration

Eventually, the new features that extend the base customer object will need to be consumed by legacy applications. Any new applications that consume microservices will have the benefit of using newer libraries that provide REST clients to make integration relatively straightforward. The same will not be true for legacy applications. The existing legacy system needs to be able to consume the new microservices without having to upgrade an application framework, platform, or library.

Legacy SOAP web services and shared database

After a new microservice has been released—which extends domain data from the legacy system—we need to start migrating all applications to consume the new microservice. The benefit we gained with the extension approach was agility (and that’s good!), but we should be decreasing our reliance on a large shared database for the microservice approach. To do this, we need to create an application that will sit on the edge of our legacy system and provide integration services to legacy applications that need to consume data from our new microservices.

Legacy edge service

To consume the new microservices from existing applications, you should limit the time spent on legacy modernization. Instead, focus any available work cycles on permanently reducing the risk of each deployment, which is achieved by adopting a system that enables deploying changes more often. To support this goal, we can impose the following principles of integration for both legacy applications and microservices.

  • Legacy applications should be able to consume new microservices without being upgraded

  • Microservices should be the only direct consumer of existing legacy web services

The Legacy Edge Service application will act as an adapter to support the expected contract and messaging protocols of existing legacy applications. The functions of the Legacy Edge Service will, in some ways, resemble functions and features that you would get from an ESB (Enterprise Service Bus).

The Dreaded ESB

The diagram below shows a view of the Customer Service connected to an ESB. This example is a familiar pattern of architecture for an SOA, where the ESB handles centralizing the integration concerns of applications.

Legacy web services SOA and ESB

In this scenario, we would have already re-routed any point-to-point calls between web services through the ESB. The ESB acts as a gateway, router, and transport layer for orchestrating and composing larger business services from components existing as backend services. These larger composite business services are monoliths in the SOA that can be decomposed into microservices. There will also be front-end applications that consume the business services through the ESB. These applications will also need to be decomposed into microservices, where self-contained business logic or any ownership of domain data exists.

In this scenario, when new microservices are ready to start taking over ownership of domain data, we can use the ESB to switch incoming requests from the underlying service components to microservices.

Legacy edge service ESB adapter

In the diagram above we see the Legacy Edge Service is providing adapter support to the ESB, which can be used to support any existing service orchestration or business process. As microservices begin replacing larger service units of the SOA, the ESB will switch traffic to the Legacy Edge Service and consume microservice routes that wrap around legacy web services that are being strangled.

It’s important that we can simply update the route configuration of existing legacy applications, making it possible to consume microservices without making source code changes. By doing this, we can defer modernizing legacy applications until they are ready to be refactored into microservices. The Legacy Edge Service provides this significant benefit.

Spending time on legacy modernization is usually not a differentiator for the business. Instead, focus that engineering effort on tasks that help the business gain more agility—by strangling legacy applications with new microservices.

Transferring Data Ownership

After legacy consumers are re-routed to the Legacy Edge Service, the only consumer of a legacy web service component will be a microservice. Because of this, we are now able to shift the system of record safely for domain data retrieved from the legacy web services. With each unique request to a microservice that touches data in the legacy system, we can move ownership of that data to the new system of record—our microservices.

A microservice will transfer ownership of domain data by persisting the response from the legacy system to its exclusive database. Afterward, any subsequent request to the microservice for the same domain data will not require a call back to the legacy system. Utilizing this technique, we can slowly strangle the monolithic service components of the legacy system by transitioning the system of record for newly requested domain data.

Cloud Native Applications

As the volume of calls from microservices to the legacy system decreases over time, microservices can begin to take advantage of the other benefits of cloud-native architectures.

The approach here is similar to how web applications cache responses. Each call to the legacy system inevitably touches vertically scaled infrastructure with finite capacity. For each unique incoming request, we incur one hit on the legacy system, which can fan out into many hits on the vertically scaled infrastructure running on-premises. For every non-unique incoming request, we incur one or more hits on cloud-native applications running on horizontally scaled infrastructure in the cloud. With this method, we’ll see the load on on-premises compute sharply drop off—as more capacity is demanded—in favor of elastic compute served from the public cloud.

Monolith to Microservice

The method that I explained above came to me about a year after completing a greenfield microservices project on a similar architecture. The project would be a pilot for building microservices that would extend legacy components of a retail banking platform—a system that was already serving millions of users in production.

The project was a success, as we realized the direct benefits of being agile with the microservices approach. While we were able to deliver business-differentiating features quickly, our speed to market came at the cost of tightly coupling microservices to the existing components of the legacy system.

There were a few factors that required us to create this tight coupling.

  • We shackled ourselves into vertically scaled infrastructure provisioned in a private data center

  • We didn’t have a platform that supported cloud-native application development

  • We didn’t have a self-service tool in place to automate provisioning of databases for new microservices

Because of these factors, we had to use the legacy system’s large shared database for persistence in our new microservices. We used database access control features to isolate our microservices’ tables from being directly accessed by other applications. Even though these access features are intended for multitenancy, they would allow us to migrate the schema easily to a separate database at a later time.

The fundamental issue with this approach was that it took us seven months to get the first microservice release into production. The early dependency on the shared database posed too much of a risk of impacting millions of production users. We realized that risk when we discovered a framework defect that caused our new microservices to be unable to release database cursors while undergoing stress testing in the performance environment. The lesson learned from this experience was an important one.

A new microservice should encapsulate both the unit of service and the unit of failure—in production—on the very first day of development.

When I say unit of service and unit of failure I am referring to a quote by storied computer scientist, Jim Gray. Gray wrote a technical report in 1985 titled Why Do Computers Stop and What Can Be Done About It?

In the report, Gray talks about how to achieve fault-tolerance in software.

As with hardware, the key to software fault-tolerance is to hierarchically decompose large systems into modules, each module being a unit of service and a unit of failure. A failure of a module does not propagate beyond the module.
— Jim Gray

When I hear thought leaders talk about microservices and say that the ideas are not new, I always think back to this quote by Jim.

Reference Architecture

The architecture for the reference application consists of multiple application layers deployed to infrastructure that is both on-premises and in the public cloud. This example works equally well when all your applications have been migrated to the cloud, but the reality is that this is almost never the case. We’ll assume throughout this example that the only viable path for you to adopt cloud-native microservices at your company is to go with a hybrid approach.

Example Cloud Native Strangler Microservice Architecture

The reference architecture in this diagram was modeled after a real world example of a hybrid cloud approach that uses Cloud Foundry for operating cloud-native applications. The Cloud Foundry deployment uses a network bridge to connect to legacy systems deployed in a data center. Applications are categorized into different zones, which are deployed to infrastructure that is either in the public cloud or a data center.

Public Cloud

Three zones represent applications deployed to the public cloud, separating applications into distinct categories. The zones differ in how applications within them are located.

  • Public Internet – Front-end applications and public REST APIs

  • Platform Services – Managed cloud platform services

  • Service Discovery – Microservices that subscribe to a discovery service

The Public Internet Zone uses routing from the cloud platform, requiring a server-side load balancer to route requests from the public internet to your applications.

The Platform Services Zone consists of services that are explicitly bound to applications deployed using the platform. The platform services are found through this relationship between platform and application.

The Service Discovery Zone consists of microservices that can only be discovered through the use of a discovery service. To locate other microservices, the platform provides a discovery service, which can be used to find the address of applications in the service discovery zone.
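
As a rough illustration, the sketch below shows what locating another application through the discovery service can look like using Spring Cloud’s DiscoveryClient abstraction. The service ID profile-service is assumed for this example.

import java.net.URI;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;

@Component
public class ProfileServiceLocator {

    @Autowired
    private DiscoveryClient discoveryClient;

    // Resolve the URI of a registered profile-service instance from the service registry
    public URI locateProfileService() {
        return discoveryClient.getInstances("profile-service").stream()
                .findFirst()
                .map(ServiceInstance::getUri)
                .orElseThrow(() -> new IllegalStateException(
                        "No instances of profile-service are registered"));
    }
}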

Data Center

There is one zone in the data center, and that is the Legacy Application Zone. Microservices deployed in the public cloud will need to connect to web services deployed to infrastructure in this zone. While over time the business logic that exists in the legacy zone will be refactored into microservices, the migration of your domain data may require extracting and moving data that is stored in a large shared database. Solving this problem while keeping the system online is like swapping out the engines of an airplane in mid-flight.

Reference Applications

The source code in this reference consists of eight separate applications. Each application in this example is built with Spring Boot and Spring Cloud.

  • Legacy Applications

    • Customer Service

    • Legacy Edge Service

  • Microservices

    • Discovery Service

    • Edge Service

    • Config Service

    • User Service

    • Profile Service

    • Profile Web

Legacy Applications

Real-world legacy systems will no doubt be more complicated than this scale model. This example contains the minimum number of applications needed to demonstrate the strangler integration pattern.

Customer Service

The Customer Service is a Spring Boot application that simulates a typical SOA web service by exposing a single SOAP endpoint for retrieving a customer domain object.

@Endpoint
public class CustomerEndpoint {
    private static final String NAMESPACE_URI = "http://kennybastani.com/guides/customer-service";

    private CustomerRepository customerRepository;

    @Autowired
    public CustomerEndpoint(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    @PayloadRoot(namespace = NAMESPACE_URI, localPart = "getCustomerRequest")
    @ResponsePayload
    public GetCustomerResponse getCustomer(@RequestPayload GetCustomerRequest request) {
        GetCustomerResponse response = new GetCustomerResponse();
        response.setCustomer(customerRepository.findCustomer(request.getUsername()));
        return response;
    }

    @PayloadRoot(namespace = NAMESPACE_URI, localPart = "updateCustomerRequest")
    @ResponsePayload
    public UpdateCustomerResponse updateCustomer(@RequestPayload UpdateCustomerRequest request)
            throws SOAPException {
        UpdateCustomerResponse response = new UpdateCustomerResponse();
        response.setSuccess(customerRepository.updateCustomer(request.getCustomer()) > 0);
        return response;
    }
}

The CustomerEndpoint has a getCustomer method that is mapped to a SOAP request payload at /v1/customers. The input parameter for this request is simply the username of the customer. The username is used to look up the record in a "large shared database", which is retrieved using JdbcTemplate.

public Customer findCustomer(String username) {
    Assert.notNull(username);

    Customer result;

    result = jdbcTemplate
            .query("SELECT id, first_name, last_name, email, username FROM customer WHERE username = ?",
                    new Object[]{username},
                    (rs, rowNum) -> {
                        Customer customer = new Customer();
                        customer.setFirstName(rs.getString("first_name"));
                        customer.setLastName(rs.getString("last_name"));
                        customer.setEmail(rs.getString("email"));
                        customer.setUsername(rs.getString("username"));
                        return customer;
                    }).stream().findFirst().orElse(null);

    return result;
}

To retrieve a response from this service, we make a POST request with the content type text/xml to the endpoint /v1/customers.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:gs="http://kennybastani.com/guides/customer-service">
   <soapenv:Header/>
   <soapenv:Body>
      <gs:getCustomerRequest>
         <gs:username>user</gs:username>
      </gs:getCustomerRequest>
   </soapenv:Body>
</soapenv:Envelope>

An XML response is then returned from the SOAP request and looks like the following result.

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
    <SOAP-ENV:Header/>
    <SOAP-ENV:Body>
        <ns2:getCustomerResponse xmlns:ns2="http://kennybastani.com/guides/customer-service">
            <ns2:customer>
                <ns2:username>user</ns2:username>
                <ns2:firstName>John</ns2:firstName>
                <ns2:lastName>Doe</ns2:lastName>
                <ns2:email>john.doe@example.com</ns2:email>
            </ns2:customer>
        </ns2:getCustomerResponse>
    </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

The Customer Service here simulates one of the most depended-upon applications deployed on-premises. Simply modernizing this application will not solve the most important problem: this service has to honor its existing contracts with nearly all of the critical applications running in production. Replacing it outright with a microservice would mean updating every one of the applications that depend on it.

To reduce the risk of disrupting applications relying on this service, we can start building a microservice that extends its customer object with new features. As legacy applications begin to be updated to use the new features of the microservice, we can create a Legacy Edge application that gives older applications a way to consume microservices without needing to modernize.

Legacy Edge Service

The Legacy Edge Service is an API gateway that maps requests from legacy applications to responses from microservices—while adhering to the messaging protocol expectations of the legacy consumers. This service contains no business logic. It is vital to rewiring direct connections away from the Customer Service, so that we can safely transition the system of record for a customer’s domain data to a new microservice.

Legacy edge microservice

Remember, we want to be able to make the switch as seamless as possible without performing any risky data migrations. This is the service that allows us to be able to do that.

@Endpoint
public class CustomerEndpoint {
    private static final String NAMESPACE_URI = "http://kennybastani.com/guides/customer-service";

    private OAuth2RestOperations restTemplate;

    @Autowired
    public CustomerEndpoint(OAuth2RestOperations oAuth2RestTemplate) {
        this.restTemplate = oAuth2RestTemplate;
    }

    ...
}

In the code snippet above, the Legacy Edge Service replicates the same SOAP endpoint as the Customer Service, with a few differences. There is a new microservice called the Profile Service that is protected with OAuth2 authorization. We should not expect that each of our legacy applications will be able to support the OAuth2 client specification without extensive changes. The legacy edge service will take care of these concerns for us.

@PayloadRoot(namespace = NAMESPACE_URI, localPart = "getCustomerRequest")
@ResponsePayload
public GetCustomerResponse getCustomer(@RequestPayload GetCustomerRequest request) {
    GetCustomerResponse response = new GetCustomerResponse();

    // Get customer object from profile microservice
    response.setCustomer(
            restTemplate.getForObject("http://profile-service/v1/profiles/{username}",
                    Customer.class, request.getUsername()));

    return response;
}

Here we see that the Legacy Edge Service uses the Discovery Service to make a request to an OAuth2 protected resource on the Profile Service. This microservice returns a Profile object, which is the extended domain object for Customer. The legacy edge service just translates the new Profile object to the expected Customer object and returns it as a SOAP response to consumers.

By providing this endpoint, no legacy applications will need to be upgraded. To start consuming microservices all we need to do is point a legacy application to the Legacy Edge Service instead of the Customer Service.

This example has a Spring Cloud Security OAuth2 resource and authorization server, which is a microservice named the User Service.

The Legacy Edge Service will use an authorization grant type of client_credentials to access the protected resources of a microservice. The beautiful thing about Spring Cloud Security is that each microservice will call back to the User Service to validate an access token before granting access to a protected resource. This process of federating authorization across microservices is called a token relay.

Any requests that come in from the public internet zone will need to authenticate using an authorization grant type of authorization_code. This grant type differs from client_credentials as it forces public internet users to authenticate through a provided login form before being granted an access token.

Since we cannot expect legacy applications to implement the OAuth2 workflow, the Legacy Edge Service uses the client_credentials grant type to request access tokens on behalf of the legacy system.
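
A minimal sketch of how the Legacy Edge Service might configure its OAuth2RestOperations bean for the client_credentials grant is shown below. The token endpoint, client ID, and secret are assumptions for illustration; the actual values live in the reference application’s configuration.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.OAuth2RestOperations;
import org.springframework.security.oauth2.client.OAuth2RestTemplate;
import org.springframework.security.oauth2.client.token.grant.client.ClientCredentialsResourceDetails;

@Configuration
public class OAuth2ClientConfig {

    // Acquires access tokens using the client_credentials grant on behalf of the legacy system
    @Bean
    public OAuth2RestOperations oAuth2RestTemplate() {
        ClientCredentialsResourceDetails details = new ClientCredentialsResourceDetails();
        details.setAccessTokenUri("http://user-service/uaa/oauth/token"); // assumed token endpoint
        details.setClientId("legacy-edge-service");                       // assumed client ID
        details.setClientSecret("secret");                                // assumed secret
        return new OAuth2RestTemplate(details);
    }
}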

Microservices

The microservices in this example are built using Spring Boot and Spring Cloud.

Discovery Service

The Discovery Service is a platform service that maintains a service registry, which is redistributed to applications in the Service Discovery Zone. For this example, we’ll stand up a Eureka server from the Spring Cloud Netflix project.

Discovery service
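
Standing up the Eureka server itself takes very little code. A minimal sketch, assuming the spring-cloud-starter-eureka-server dependency is on the classpath:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer // turns this application into a Eureka service registry
public class DiscoveryServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(DiscoveryServiceApplication.class, args);
    }
}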

Edge Service

The Edge Service is a platform service that uses the service registry from the Discovery Service to provide a public API gateway for the REST APIs exposed by the microservices. We’re using this Edge Service in a similar way to the Legacy Edge Service, but exposing it to the public internet zone. The Edge Service will compose each microservice into a single unified REST API that enforces the OAuth2 client specification. Any consumer of this service will be forced to use the authorization_code grant type, which requires user-level authentication.

For the Edge Service, we are again drawing from the Spring Cloud Netflix project to embed a Zuul reverse proxy that acts as a single gateway to each microservice. For front-end applications, we can bind to this Edge Service and use it as a single REST API that provides endpoints for every independent microservice.

Edge service
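
Enabling the embedded Zuul proxy is similarly lightweight. A minimal sketch, assuming the spring-cloud-starter-zuul dependency is on the classpath; the route mappings to each microservice would live in the application’s configuration:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableZuulProxy // embeds a Zuul reverse proxy that routes requests to discovered microservices
public class EdgeServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(EdgeServiceApplication.class, args);
    }
}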

User Service

The User Service is a platform service that contains an OAuth2 authorization and resource server for accessing any protected resources of our microservices. The User Service will manage and secure how all consumers can access resources from our microservices. Here we are using Spring Cloud Security OAuth2 to issue and validate access tokens for each microservice. The added benefit of using Spring Cloud Security is that access tokens will be relayed in requests microservice-to-microservice, securing an entire chain of requests for resources as a feature of the application framework.

User service
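
The main class of the User Service might take the following shape. This is a minimal sketch that omits the configuration of clients, token stores, and user details:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableAuthorizationServer;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableResourceServer;

@SpringBootApplication
@EnableAuthorizationServer // issues access tokens to OAuth2 clients
@EnableResourceServer      // validates tokens on this service's own protected endpoints
public class UserServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(UserServiceApplication.class, args);
    }
}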

Profile Service

The Profile Service is a microservice that extends the domain data of the legacy Customer Service. This is the microservice that is strangling the domain data of the Customer Service—and in the process—slowly transitioning the system of record away from the large shared database in the legacy system. The Profile Service exposes protected domain resources as a REST API, using the Spring Cloud Security project to implement the OAuth2 client workflow.

Profile service

@RestController
@RequestMapping(path = "/v1")
public class ProfileControllerV1 {

    private ProfileServiceV1 profileService;

    @Autowired
    public ProfileControllerV1(ProfileServiceV1 profileService) {
        this.profileService = profileService;
    }

    @RequestMapping(path = "/profiles/{username}", method = RequestMethod.GET)
    public ResponseEntity getProfile(@PathVariable String username) throws Exception {
        return Optional.ofNullable(profileService.getProfile(username))
                .map(a -> new ResponseEntity<>(a, HttpStatus.OK))
                .orElseThrow(() -> new Exception("Profile for user does not exist"));
    }

    @RequestMapping(path = "/profiles/{username}", method = RequestMethod.POST)
    public ResponseEntity updateProfile(@RequestBody Profile profile) throws Exception {
        return Optional.ofNullable(profileService.updateProfile(profile))
                .map(a -> new ResponseEntity<>(a, HttpStatus.OK))
                .orElseThrow(() -> new Exception("Profile for user does not exist"));
    }
}

In the code snippet above we see the ProfileControllerV1 class, which is a REST controller that provides an endpoint for retrieving the Profile of a user. The Profile object we are retrieving here will extend fields from the Customer object, after retrieving domain data from the legacy Customer Service. To do this, we will call directly to the Customer Service in the legacy application zone using a SOAP client.

public class CustomerClient extends WebServiceGatewaySupport {

    private static final Logger log = LoggerFactory.getLogger(CustomerClient.class);

    private static final String ROOT_NAMESPACE = "http://kennybastani.com/guides/customer-service/";
    private static final String GET_CUSTOMER_NAMESPACE = "getCustomerRequest";
    private static final String UPDATE_CUSTOMER_NAMESPACE = "updateCustomerRequest";

    public GetCustomerResponse getCustomerResponse(String username) {
      ...
    }

    public UpdateCustomerResponse updateCustomerResponse(Profile profile) {
      ...
    }
}

In the snippet above we find the definition of the CustomerClient. This class will provide the Profile Service with a capable SOAP client that can retrieve a Customer record from the legacy Customer Service. We’ll use this client from the ProfileServiceV1 class below to retrieve the Customer domain data that we will be extending in the Profile object.

@Service
public class ProfileServiceV1 {

    private ProfileRepository profileRepository;
    private CustomerClient customerClient;

    @Autowired
    public ProfileServiceV1(ProfileRepository profileRepository, CustomerClient customerClient) {
        this.profileRepository = profileRepository;
        this.customerClient = customerClient;
    }

    public Profile getProfile(String username) {
        ...
    }

    public Profile updateProfile(Profile profile) {
        ...
    }
}

The code snippet above contains the definition of the ProfileServiceV1 class. This bean will conditionally call the legacy Customer Service by making a SOAP request from the CustomerClient. The getProfile method is called by the ProfileControllerV1 class, returning a Profile object that extends domain data from the legacy Customer object.

public Profile getProfile(String username) {

    // Check for the profile record
    Profile profile = profileRepository.getProfileByUsername(username);

    // If the profile does not exist in the repository, import it from the SOAP service
    if (profile == null) {
        // Request the customer record from the legacy customer SOAP service
        profile = Optional.ofNullable(customerClient.getCustomerResponse(username)
                .getCustomer())
                .map(p -> new Profile(p.getFirstName(), p.getLastName(),
                        p.getEmail(), p.getUsername()))
                .orElse(null);

        if (profile != null) {
            // Migrate the system of record for the profile to this microservice
            profile = profileRepository.save(profile);
        }
    }

    return profile;
}

As a part of this workflow, the Profile Service looks to its attached MySQL database, using the ProfileRepository to find a Profile record with username as the lookup key. If the Profile for the requested user does not exist in the database, a request to retrieve the Customer object is made to the Customer Service. If the Customer Service returns a Customer record in the response, the base domain data from the legacy service is used to construct a new Profile record, which the Profile Service then saves to its attached MySQL database.

Using this workflow, the Profile Service only needs to call the legacy system once for each Profile that is requested. Since we’ve re-routed all requests from other legacy applications to use the Legacy Edge Service, we can safely transition the system of record for domain data away from the legacy Customer Service without performing any risky database migrations. Further, to support backward compatibility in the "large shared database", we can replicate any updates to the base Customer domain data by scheduling tasks asynchronously to call the Customer Service when a change is made to a Profile.

public Profile updateProfile(Profile profile) throws IOException {

    Assert.notNull(profile);

    // Get current authenticated user
    User user = oAuth2RestTemplate.getForObject("http://user-service/uaa/v1/me", User.class);

    // Get current profile
    Profile currentProfile = getProfile(user.getUsername());

    if (currentProfile != null) {
        if (currentProfile.getUsername().equals(profile.getUsername())) {
            // Save the profile
            profile.setId(currentProfile.getId());
            profile.setCreatedAt(currentProfile.getCreatedAt());
            profile = profileRepository.save(profile);

            // Replicate the write to the legacy customer service
            amqpTemplate.convertAndSend("customer.update",
                    new ObjectMapper().writeValueAsString(profile));
        }
    }

    return profile;
}

The snippet above is the implementation of updateProfile. In this method we receive a request to update the profile of a user. The first step is to ensure that the profile being modified belongs to the user who is currently authenticated. To make sure that only the user who owns the profile can update the domain resource, we check that the username of the submitted profile matches the username of the authenticated user.

To support backward compatibility with the legacy system, we’ll need to support a different workflow for validating the authenticated user, since the Legacy Edge Service uses client_credentials for authorization.

After updating the profile, we need to replicate the write back to the customer service. To make sure that the cloud-native application is able to scale writes without dependency on the legacy system, we want to be able to durably replicate the write in an async workflow. By sending a durable message to a RabbitMQ queue, we can use the Profile Service to send back updates to the Customer Service asynchronously without tying up thread and memory resources of the web server.

@RabbitListener(queues = {"customer.update"})
public void updateCustomer(String message) throws InterruptedException, IOException {
    Profile profile = objectMapper.readValue(message, Profile.class);

    try {
        // Update the customer service for the profile
        UpdateCustomerResponse response =
                customerClient.updateCustomerResponse(profile);

        if (!response.isSuccess()) {
            String errorMsg =
                    String.format("Could not update customer from profile for %s",
                            profile.getUsername());
            log.error(errorMsg);
            throw new UnexpectedException(errorMsg);
        }
    } catch (Exception ex) {
        // Throw AMQP exception and redeliver the message
        throw new AmqpIllegalStateException("Customer service update failed", ex);
    }
}

The snippet above is the message listener on the Profile Service that asynchronously issues writes back to the Customer Service in the legacy system. Since the network is prone to failure, this workflow guards against data loss: the RabbitMQ message is only acknowledged after the update to the Customer Service succeeds.

Profile Web

The Profile Web microservice is a front-end Spring Boot application that houses the static content of an AngularJS website. The Profile Web application will bind to the Edge Service and embed its API gateway using Spring Cloud Netflix’s Zuul as a reverse proxy. By embedding the Edge Service into the Profile Application, the client-side JavaScript of the AngularJS website will not need to request resources from a separate domain. The Edge Service will be made available as an endpoint at /api/**.

Running the Example

There are two ways to run the reference application, with either Docker Compose or Cloud Foundry, the latter of which can be installed on a development machine using PCF Dev. Since the distributed application is designed to be cloud-native, there is a lot to be gained from understanding how to deploy the example using Cloud Foundry.

The source code for the reference application is available on GitHub at https://github.com/kbastani/cloud-native-microservice-strangler-example. Clone the repository and run the example using the directions below.

Docker Compose

To run the example using Docker Compose, a run.sh script is provided which will orchestrate the startup of each application. Since the example will run 8 applications and multiple backing services, it’s necessary to have at least 9GB of memory allocated to Docker.

The run.sh script is designed to use Docker Machine, so if you’re using Docker for Mac, you’ll need to modify the run.sh script by setting DOCKER_IP to localhost.

Cloud Foundry

To run the example using Cloud Foundry, a deploy.sh script is provided which will orchestrate the deployment of each application to a simulated cloud-native environment. If you have enough resources available, you can deploy the example on Pivotal Web Services. If you’re new to Cloud Foundry, it’s highly recommended that you go with the PCF Dev approach, which you can install by following the directions at https://docs.pivotal.io/pcf-dev/.

When you have a CF environment to deploy the example, go ahead and run the deploy.sh script in the parent directory of the project. The bash script is commented enough for most to understand the steps of the deployment. Each Cloud Foundry deployment manifest is located in the directory of the application and is named manifest.yml. The script will deploy the Spring Cloud backing services first, and afterward, each microservice will be deployed one by one until each application is running.

Managing Profiles

While the example project contains 8 separate applications, the only front-end application is the Profile Web microservice. If you’re running the example on PCF Dev, you can access the Eureka dashboard at http://discovery-service.local.pcfdev.io/.

Eureka Dashboard

If the applications listed in the Eureka dashboard look like the example above, then the deployment was successful. To access the Profile Web application, go to http://profile-web.local.pcfdev.io/. You’ll be immediately redirected to the OAuth2 gateway’s login form on the User Service.

OAuth2 user login

The User Service is only configured to allow one user to sign in. To log in, use the very secure credentials of user and password, and you’ll be redirected back to the Profile Web application.

Manage profile

The Profile Web application will then allow you to update the current user’s profile information. Recall the workflow we walked through earlier for getting profile information from the Customer Service: by the time this page loads, that workflow is complete. The Profile Web application calls the Profile Service through the embedded Edge Service application’s API gateway. The Profile Service then checks to see if the user’s profile information is stored in its database. If the data is unavailable, it calls the Customer Service using the SOAP client and imports the profile information by saving it to its database.

Now that the domain data has been migrated from the Customer Service, we need to verify that any updates from the Profile Web application find their way back into the legacy application’s shared database. To verify this, go ahead and update the fields on the Profile Web application’s UI.

Update profile

Here I’ve updated the default user to my own profile information, and the result was successful. To verify that the legacy system is in sync with the microservices, we can send an HTTP SOAP request to the Customer Service.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
      xmlns:gs="http://kennybastani.com/guides/customer-service">
   <soapenv:Header/>
   <soapenv:Body>
      <gs:getCustomerRequest>
         <gs:username>user</gs:username>
      </gs:getCustomerRequest>
   </soapenv:Body>
</soapenv:Envelope>

Using a REST client, send a POST request with the XML snippet above to the Customer Service at http://customer-service.local.pcfdev.io/v1/customers/user.

The REST client that you use needs to send the POST request with a content type of text/xml, since the legacy service uses SOAP as the messaging protocol.

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
    <SOAP-ENV:Header/>
    <SOAP-ENV:Body>
        <ns2:getCustomerResponse xmlns:ns2="http://kennybastani.com/guides/customer-service">
            <ns2:customer>
                <ns2:username>user</ns2:username>
                <ns2:firstName>Kenny</ns2:firstName>
                <ns2:lastName>Bastani</ns2:lastName>
                <ns2:email>kenny.bastani@example.com</ns2:email>
            </ns2:customer>
        </ns2:getCustomerResponse>
    </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

You should see something similar to the response above. The Customer Service has no awareness of the new Profile Service and will only return a response from the large database that it shares with other legacy applications.

Conclusion

Now that we’ve explored the reference application, there are a few extra things to be mindful of as you tackle the challenges that come with implementing this hybrid microservice approach. One concern is that the legacy system will still need to be able to mirror any updates to domain data from new microservices. To do this, we used a RabbitMQ message broker that can durably store ordered messages and asynchronously apply updates back to the legacy system. This method will be eventually consistent, which requires additional scrutiny when it comes to handling state.

Be mindful of state

Some domain objects that come from the legacy system will contain stateful properties. It’s much safer to migrate domain data that is stateless—but in reality, that’s an uncommon occurrence. Any field of an object that represents state will have a dependency on business logic. That’s an important consideration with this approach, so be mindful of the following.

  • Never deploy a new feature without ensuring backward compatibility with the legacy system

  • Identify fields sourced from the legacy system that represent state

  • Never store state as fields in microservices—store state as events, as sketched after this list
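
As a rough illustration of the last point, the sketch below records a state change as an appended event instead of an overwritten field. ProfileEvent is a hypothetical name for this example.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// An immutable record of a state change, appended rather than updated in place
@Entity
public class ProfileEvent {

    @Id
    @GeneratedValue
    private Long id;

    private String username;
    private String type;      // e.g. "EMAIL_CHANGED"
    private String payload;   // serialized details of the change
    private Long createdAt = System.currentTimeMillis();

    // constructors, getters, and setters omitted for brevity
}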

Be mindful of consistency

To be successful with this method and make the minimum amount of changes to the legacy system, you’ll need to replicate updates to domain data durably back to the legacy system. With microservices we need to embrace eventual consistency, so be mindful of the following.

  • Respect foreign key relationships in a monolith’s database

  • Work to decouple table constraints that block updates through a legacy web service

  • Move all legacy field validators (including database constraints) into your microservices

  • Make sure updates sent to a legacy service from a microservice are always able to succeed eventually

Be mindful of observability

You can’t fix what you can’t measure. From the first day you’re in production, you should have maximal visibility into how your microservices are handling data capture from the legacy system. To increase observability, keep the following considerations in mind.

  • Monitor for failures during an attempt to update the legacy system from a microservice

  • Be quick to analyze and remediate failures that are blocking a microservice from replicating updates to the legacy system

Be mindful of resiliency

Always account for the inevitable failures that will come with the first few iterations of integrating your new microservices with the legacy system. Use circuit breakers in Spring Cloud Netflix to create fallback plans that temporarily escalate the privileges of your microservice so that it can interact safely with a legacy data source in the event of repeated failure, as sketched after the list below.

  • Make sure to provide mechanisms to override unknown constraints that block updates to the legacy system

  • Build in escalated fallback measures in the case that a microservice repeatedly fails to update a legacy data source

  • Make sure to performance test connecting to a shared database from your microservice in any fallback scenario

  • Spare no effort to prevent data loss by persisting any changes scheduled for the legacy system into durable storage
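
Below is a minimal sketch of a circuit breaker with a fallback, using the @HystrixCommand annotation from Spring Cloud Netflix. It assumes @EnableCircuitBreaker is set on the application and reuses the reference application’s CustomerClient; the escalated fallback behavior is application-specific and only hinted at here.

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;

@Service
public class LegacyCustomerUpdater {

    private final CustomerClient customerClient;

    public LegacyCustomerUpdater(CustomerClient customerClient) {
        this.customerClient = customerClient;
    }

    // Opens the circuit after repeated failures and invokes the fallback instead
    @HystrixCommand(fallbackMethod = "updateCustomerFallback")
    public boolean updateCustomer(Profile profile) {
        return customerClient.updateCustomerResponse(profile).isSuccess();
    }

    // Fallback: keep the pending change durable so it can be retried or escalated later
    public boolean updateCustomerFallback(Profile profile) {
        // e.g. re-queue the update or persist it to durable storage for escalation
        return false;
    }
}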
