This blog post will introduce you to building service block architectures using Spring Cloud Function and AWS Lambda.
What is Spring Cloud Function?
Spring Cloud Function is a project from Pivotal that brings the same popular fundamentals behind Spring Boot to serverless functions.
Service Block Architecture
One of the most important considerations in software design is modularity. If we think about modularity in the mechanical sense, components of a system are designed as modules that can be replaced in the event of a mechanical failure. In the engine of a car, for example, you do not need to replace the entire engine if a single spark plug fails.
In software, modularity allows you to design for change.
Modularity also gives developers a shared map that can be used to reason about the functionality of an application. By being able to visualize and map out the complex processes that are orchestrated by an application’s source code, developers and architects alike can more easily visualize where to make a change with surgical precision.
Changing software
In many ways, we should consider ourselves lucky to be building software instead of cars. Some of today’s most valuable companies are created using bits and bytes instead of plastic and metal. But despite these advances, the very best car company releases less often than the world’s very worst software company.
An application’s source code is a system of connected bits and bytes that is always evolving—one change after another. But, as the source code of a system expands or contracts, small changes require us to build and deploy entire applications.
To make one small code change to a production environment, we are required to deploy everything else we didn’t change.
When teams share a deployment pipeline for an application, teams become forced to plan around a schedule they have little or no control over. For this reason, innovation is stifled—as developers must wait for the next bus before they can get any feedback about their changes.
The result of building microservices is an ever increasing number of pathways to production. With more and more microservices, the amount of unchanged code per deployment decreases when measured across all applications. It's this decomposition that drives down the amount of unchanged code deployed over time—an important metric. Serverless functions can push this number even lower, as the unit of change becomes the function. But how do microservices and serverless functions fit together?
Service Blocks
Service blocks are cloud-native applications that share many characteristics with microservices. The key difference from microservices is that a service block is a self-contained system that has multiple independently deployable units—mixing serverless functions with containers.
While microservices can be created entirely as serverless functions, a service block focuses on a contextual model that combines together traditional "always-on" applications with portable on-demand functions.
The Patterns
The basic pattern of a service block combines a core application running in a container with a collection of serverless functions.
A basic service block will contain a single Spring Boot application (service core) that communicates with serverless functions.
In this post we will focus on a basic service block, which is composed of two parts:

- Service Cores
- Functions
Service Cores
Each service block will have a primary application container that communicates with other backing services, such as a database or a message broker. These application containers are called service cores. Cores are responsible for dispatching events to serverless functions that are deployed inside of the boundary of a service block.
In the diagram above, you'll see a service core that is sending events to two different functions deployed to AWS Lambda. For this example, the functions contain the business logic for most of the application. The State Machine Function includes the recipe for each domain aggregate. This function replays a stream of events to rebuild the current state of each domain aggregate, an approach called event sourcing.
The Metrics Function does something similar. Each instance of a service core will emit operational events to the Metrics Function. These metrics can then be event sourced into reactive views that are exposed as a REST API to service consumers. You can also feed these events into an operational matrix of functions that can be used to automate tasks that keep each application instance healthy.
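The fold from raw metric events into aggregated views can be sketched without any framework. The `MetricEvent` and `View` names below follow the post, but the in-memory map is a stand-in for the MongoDB collection and the field set is simplified for illustration:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MetricFold {

    // Hypothetical, simplified stand-ins for the post's MetricEvent and View types
    record MetricEvent(String key, long lastModified) {}
    record View(String key, int matches, long lastModified) {}

    // Event-source a stream of metric events into per-key views,
    // mirroring the function's find-and-increment-or-insert logic
    static Map<String, View> apply(Iterable<MetricEvent> events) {
        Map<String, View> views = new HashMap<>();
        for (MetricEvent e : events) {
            views.merge(e.key(),
                    // First event for this key: insert a fresh view
                    new View(e.key(), 1, e.lastModified()),
                    // Subsequent events: increment matches and refresh the timestamp
                    (old, fresh) -> new View(old.key(), old.matches() + 1, fresh.lastModified()));
        }
        return views;
    }
}
```

The `merge` call plays the same role as the `findAndModify` upsert in the real function: insert a fresh view on the first event for a key, and increment `matches` on every subsequent one.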
Anatomy of a Function
The anatomy of a basic Spring Cloud Function project is quite simple.
@SpringBootApplication
public class MetricsFunction {

    public static void main(String[] args) {
        SpringApplication.run(MetricsFunction.class, args);
    }

    @Bean
    public Function<MetricEvent, View> function(MongoTemplate mongoTemplate) {
        return metricEvent -> {
            // Get the event's key to look up a view
            String key = metricEvent.getKey();

            // Find the view's document if it exists; if not, insert a new one
            Query updateQuery = new Query(Criteria.where("_id").is(key));

            // Increment the event's match count
            Update update = new Update().inc("matches", 1)
                    .set("lastModified", metricEvent.getLastModified());

            // Apply the increment or insert a new document and return the result
            View viewResult = mongoTemplate.findAndModify(updateQuery, update,
                    new FindAndModifyOptions().returnNew(true).upsert(true), View.class);

            if (viewResult.getMatches() <= 1) {
                mongoTemplate.save(viewResult);
            }

            return viewResult;
        };
    }
}
The example above is a Spring Boot application that uses Spring Cloud Function to collect metric events from a service core and update a view in a MongoDB database. Now, going back to the diagram from earlier, we can begin to connect the dots on how events get generated from the service core.
Incoming requests to the service core will come in the form of commands. These commands map to Spring MVC controllers that will emit events to functions that are deployed to AWS Lambda. The service core in this example shares a MongoDB database with its functions. By sharing this data source, view updates can be subscribed to without waiting for a response to return from a Lambda function.
Further, with reactive repository support in MongoDB, consumers can reactively monitor for events in real-time from AWS Lambda functions.
@GetMapping(value = "/metricEvents/{key}", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent<MetricView>> streamEvents(@PathVariable String key,
        HttpServletRequest request) {
    // Stream the events from MongoDB
    Flux<MetricView> events = eventRepository.findByKey(key);

    // Check if this is an SSE reconnection from a client
    String lastEventId = request.getHeader("Last-Event-Id");

    // On SSE client reconnect, skip ahead in the stream to play back only new events
    if (lastEventId != null)
        events = events.skipUntil(e -> e.getId().equals(lastEventId)).skip(1);

    // Subscribe to the tailing events from the reactive repository query
    return events.map(event -> ServerSentEvent.builder(event)
            .event(event.getCreatedDate().toString())
            .id(event.getId())
            .build())
            .delayElements(Duration.ofMillis(100));
}
In the example above, you'll see a controller method that returns a Flux of ServerSentEvent<MetricView>. This method sits on the service core, and will monitor a MongoDB collection for new events and emit them every 100ms. A Server-Sent Event is a technology that allows consumers to subscribe to events emitted by an HTTP server. In the case that an HTTP disconnect occurs, which is a frequent scenario, the client will send another request with the Last-Event-Id field in the headers. This allows the reactive event stream to resume where it last left off.
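The resume behavior behind Last-Event-Id can be illustrated outside of Reactor. This plain-Java sketch mirrors what `skipUntil(...).skip(1)` does in the controller above: drop everything up to and including the last event the client acknowledged (the event ids are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class SseResume {

    // Replay only the events after lastEventId, mirroring
    // events.skipUntil(e -> e.getId().equals(lastEventId)).skip(1)
    static List<String> resume(List<String> eventIds, String lastEventId) {
        if (lastEventId == null) return eventIds; // fresh connection: full replay
        List<String> out = new ArrayList<>();
        boolean seen = false;
        for (String id : eventIds) {
            if (seen) out.add(id);                         // emit everything after the match
            else if (id.equals(lastEventId)) seen = true;  // skipUntil, then skip(1) drops the match
        }
        return out;
    }
}
```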
Example Project
I’ve put together an example project that demonstrates the basics of a service block architecture with Spring Cloud Function. This will be the first of multiple examples, each demonstrating a different service block pattern. For this first service block, we’ll create an account service that dispatches events to a Spring Cloud Function app on AWS Lambda.
The concerns we’ll be going over in this post:
- Service Core
- Functions
- Deployment
- Lambda Invocation
Service Core
In this example, an account service core allows consumers to manage records using a workflow that is common to CQRS and Event Sourcing applications.
In the diagram above you’ll see an account resource that is connected to a set of commands and events. One of the goals in the account service core is to enable a CQRS workflow for interacting with domain aggregates. To make it easy for other microservices to consume this event-driven workflow, we can conveniently embed hypermedia links to both the event log and commands.
Creating an Account
The first concern we should address in the service core is to create an endpoint for creating new accounts.
AccountController

@RestController
@RequestMapping("/v1")
public class AccountController {

    @PostMapping(path = "/accounts")
    public ResponseEntity createAccount(@RequestBody Account account) {
        return Optional.ofNullable(createAccountResource(account))
                .map(e -> new ResponseEntity<>(e, HttpStatus.CREATED))
                .orElseThrow(() -> new RuntimeException("Account creation failed"));
    }
The snippet above is from the service core's AccountController class. Let's see what happens when we try to create a new Account over HTTP.
HTTP POST /v1/accounts
{
"firstName": "Taylor",
"lastName": "Swift",
"email": "tswift@cloud.com"
}
In the snippet above, we’ve sent a POST request with the information of the new account we’d like to create. After sending the request, we’ll get back an Account resource that contains the newly minted account.
{
"createdAt": 1491473123758,
"lastModified": 1491473123758,
"firstName": "Taylor",
"lastName": "Swift",
"email": "tswift@cloud.com",
"status": "ACCOUNT_CREATED",
"_links": {
"commands": {
"href": "http://localhost:8080/v1/accounts/1/commands"
},
"events": {
"href": "http://localhost:8080/v1/accounts/1/events"
}
},
"accountId": 1
}
Here we see the Account that we just created. Notice that there are two hypermedia links in the response body for the _links property. We can think of these hypermedia links as if they were methods in the Account class. If we want to access the available commands for an Account, we can simply traverse the link for commands, which returns a response containing the commands that can be executed on the account.

By keeping the event log attached as a link on the account resource, all consumers will be able to easily locate the events that have affected the account's current state.
The Commands
Next, let's fetch the commands that are available for the Account resource. To do this, we'll send an HTTP GET request to the location href listed on the hypermedia link named commands.

GET /v1/accounts/1/commands
{
"_links": {
"activate": {
"href": "http://localhost:8080/v1/accounts/1/commands/activate"
},
"suspend": {
"href": "http://localhost:8080/v1/accounts/1/commands/suspend"
}
}
}
By attaching the commands to an account resource as a hypermedia link, all consumers will be able to easily look up the commands that can be executed on the resource.
After traversing to commands, we are provided back another set of links that we can continue to follow. We see that we can either activate or suspend this account. First, let's try executing the activate command. To do this, we make an HTTP GET request to the href associated with the command.

GET /v1/accounts/1/commands/activate
{
"createdAt": 1491459939554,
"lastModified": 1491473977565,
"firstName": "Taylor",
"lastName": "Swift",
"email": "tswift@cloud.com",
"status": "ACCOUNT_ACTIVATED",
"_links": {
"commands": {
"href": "http://localhost:8080/v1/accounts/1/commands"
},
"events": {
"href": "http://localhost:8080/v1/accounts/1/events"
}
},
"accountId": 1
}
In the example above, we see the command returned the account resource with a new value for the status property. After executing the command, the account's status transitioned from ACCOUNT_CREATED to ACCOUNT_ACTIVATED. Let's try sending the same activate command twice in a row and see what happens.

GET /v1/accounts/1/commands/activate
{
"timestamp": 1491474077084,
"status": 400,
"error": "Bad Request",
"exception": "java.lang.RuntimeException",
"message": "Account already activated",
"path": "/v1/accounts/1/commands/activate"
}
As expected, we've received an error. This is because the account we created had already been activated, which means that we cannot issue the same command twice in a row. Now, what's interesting about this response is that the validation logic is not coming from within the core Spring Boot application. Instead, we have two stateless functions that are deployed to AWS Lambda. These two functions act as event handlers, mutating state based on the current context and command of an account.
Now, let's try the only other command that is listed on the account resource: suspend.

GET /v1/accounts/1/commands/suspend
{
"createdAt": 1491459939554,
"lastModified": 1491474306296,
"firstName": "Taylor",
"lastName": "Swift",
"email": "tswift@cloud.com",
"status": "ACCOUNT_SUSPENDED",
"_links": {
"commands": {
"href": "http://localhost:8080/v1/accounts/1/commands"
},
"events": {
"href": "http://localhost:8080/v1/accounts/1/events"
}
},
"accountId": 1
}
Now we see that the account response successfully transitioned from ACCOUNT_ACTIVATED to ACCOUNT_SUSPENDED, without error. This is a fairly trivial example, where we have two different states that can be transitioned to and from without being applied twice in a row.

Imagine the complexity of a domain aggregate with many different states and rules between transitions. Things can get complicated quickly. To simplify the system design, we can start out by modeling these state transitions as a directed graph, called a state machine.
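One way to model that directed graph is a plain transition table. This sketch uses the statuses from this example; the edge set is an assumption chosen for illustration, not taken from the project:

```java
import java.util.Map;
import java.util.Set;

public class AccountStateMachine {

    enum Status { ACCOUNT_CREATED, ACCOUNT_ACTIVATED, ACCOUNT_SUSPENDED }

    // Directed graph of valid transitions: node -> reachable nodes
    static final Map<Status, Set<Status>> TRANSITIONS = Map.of(
            Status.ACCOUNT_CREATED,   Set.of(Status.ACCOUNT_ACTIVATED, Status.ACCOUNT_SUSPENDED),
            Status.ACCOUNT_ACTIVATED, Set.of(Status.ACCOUNT_SUSPENDED),
            Status.ACCOUNT_SUSPENDED, Set.of(Status.ACCOUNT_ACTIVATED));

    // Apply a transition, rejecting edges that are not in the graph
    static Status transition(Status current, Status target) {
        if (!TRANSITIONS.getOrDefault(current, Set.of()).contains(target))
            throw new IllegalStateException(current + " -> " + target + " is not allowed");
        return target;
    }
}
```

Activating an already ACCOUNT_ACTIVATED account has no edge in this graph, which is exactly the rule the Lambda functions enforce in this example.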
Functions
Now that we’ve seen the workflow in the account service core for creating and managing accounts, let’s see how the core makes requests to Spring Cloud Function apps deployed to AWS Lambda.
public interface LambdaFunctionService {

    @LambdaFunction(functionName = "account-activated", logType = LogType.Tail)
    Account accountActivated(AccountEvent event);

    @LambdaFunction(functionName = "account-suspended", logType = LogType.Tail)
    Account accountSuspended(AccountEvent event);
}
In the interface above we see two AWS Lambda functions that will handle events for an account, which are triggered by the suspend and activate commands.
The Event Log
The goal for each function is to validate the state of the Account aggregate. This is a simple use case to start out, and as this series continues, we'll see what more complex service blocks look like. For now, we want our functions to be able to change the status field on an account. This means that the function will need a history of events that have previously been applied to an Account aggregate. To be able to see the account's historical events, we just follow the events link to fetch the account's event log.

GET /v1/accounts/1/events
[
{
"eventId": 1,
"type": "ACCOUNT_ACTIVATED",
"accountId": 1,
"createdAt": 1491459944711,
"lastModified": 1491459944711
},
{
"eventId": 2,
"type": "ACCOUNT_SUSPENDED",
"accountId": 1,
"createdAt": 1491459950342,
"lastModified": 1491459950342
}
]
After retrieving the event log for the account, we see two events that were added after executing the activate and suspend commands. Each time a command is executed on an account — and if the state of the aggregate is valid — we will apply one new event and append it to the log.
Since it’s not practical for a Lambda function to callback to retrieve the event log, we’ll go ahead and send it as an "attachment" to the event’s payload. By doing this, we provide the full context on what has previously happened to the account.
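One way to shape that attachment is a small wrapper object carrying both the event log and the current aggregate snapshot. The class below is hypothetical, inferred from the getPayload().getEvents() and getPayload().getAccount() calls used by the functions; it is kept generic so the sketch stays self-contained:

```java
import java.util.List;

// Hypothetical payload attached to each dispatched event:
// the full event history plus the account's current state
public class AccountEventPayload<E, A> {

    private final List<E> events;
    private final A account;

    public AccountEventPayload(List<E> events, A account) {
        this.events = events;
        this.account = account;
    }

    public List<E> getEvents() { return events; }

    public A getAccount() { return account; }
}
```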
The next thing we need to do is to figure out how events are dispatched to Lambda functions. Let’s see how routing is handled from commands that are executed on an account, to events dispatched to functions.
Routing to AWS Lambda
As we saw earlier, the account service core has a controller class named AccountController. Yet, we only observed the behavior of this component from the perspective of a REST API consumer. In addition to more basic CRUD operations on an account, the AccountController allows API consumers to execute commands. These commands will then generate events that are handled by a Spring Cloud Function app.
@RequestMapping(path = "/accounts/{id}/commands/activate")
public ResponseEntity activate(@PathVariable Long id) {
    return Optional.ofNullable(accountRepository.findOne(id))
            .map(a -> eventService
                    .apply(new AccountEvent(AccountEventType.ACCOUNT_ACTIVATED, id)))
            .map(this::getAccountResource)
            .map(e -> new ResponseEntity<>(e, HttpStatus.OK))
            .orElseThrow(() -> new RuntimeException("The command could not be applied"));
}
Here we see the method body for a command that activates an account. First, we fetch the Account from the AccountRepository by its ID. Next, we create a new AccountEvent. We then send the event to the EventService, where the apply method will figure out where to route the event.
public Account apply(AccountEvent accountEvent) {
    Assert.notNull(accountEvent.getAccountId(),
            "Account event must contain a valid account id");

    // Get the account referenced by the event
    Account account = accountRepository.findOne(accountEvent.getAccountId());
    Assert.notNull(account, "An account for that ID does not exist");

    // Get a history of events for this account
    List<AccountEvent> events = accountEventRepository
            .findEventsByAccountId(accountEvent.getAccountId());

    // Sort the events in reverse chronological order
    events.sort(Comparator.comparing(AccountEvent::getCreatedAt).reversed());

    LambdaResponse<Account> result = null;

    // Route requests to serverless functions
    switch (accountEvent.getType()) {
        case ACCOUNT_ACTIVATED:
            result = accountCommandService.getActivateAccount()
                    .apply(withPayload(accountEvent, events, account));
            break;
        case ACCOUNT_SUSPENDED:
            result = accountCommandService.getSuspendAccount()
                    .apply(withPayload(accountEvent, events, account));
            break;
    }

    // ...

    return account;
}
The example snippet above shows how account events are dispatched to AWS Lambda functions. Depending on the AccountEventType, the AccountCommandService will route the event request to a specific function deployed to AWS Lambda.
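As a design note, the switch statement above grows with every new event type. An alternative worth considering is a dispatch table keyed by event type; the handlers below are toy stand-ins for the Lambda invocations:

```java
import java.util.Map;
import java.util.function.UnaryOperator;

public class EventRouter {

    enum AccountEventType { ACCOUNT_ACTIVATED, ACCOUNT_SUSPENDED }

    // Dispatch table: event type -> handler (standing in for a Lambda function call)
    static final Map<AccountEventType, UnaryOperator<String>> ROUTES = Map.of(
            AccountEventType.ACCOUNT_ACTIVATED, payload -> payload + ":activated",
            AccountEventType.ACCOUNT_SUSPENDED, payload -> payload + ":suspended");

    // Look up the handler for the event type and apply it, failing fast on unknown types
    static String route(AccountEventType type, String payload) {
        UnaryOperator<String> handler = ROUTES.get(type);
        if (handler == null) throw new IllegalArgumentException("No route for " + type);
        return handler.apply(payload);
    }
}
```

Adding a new event type then means adding one map entry rather than another case branch.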
Functions
Now that the account service core is ready to start dispatching events to AWS Lambda, it’s time to set up our Spring Cloud Function handlers.
This example contains two Spring Cloud Function projects: account-activated and account-suspended. Each of these projects is nearly identical, for simplicity's sake. In the next part of this series we will look at consolidating the business logic for state transitions into a single function.
Let's explore the account-activated function, assuming that account-suspended has nearly the same source code.
Handler
Each Spring Cloud Function project has a handler that describes the inputs and outputs of a function.
public class Handler extends SpringBootRequestHandler<AccountEvent, Account> {
}
In the example above, not much is going on—but this little class is essential to a Spring Cloud Function application. This class describes how this function should be requested, and what the input and output types are. The only other requirement is that we define a functional bean that implements the business logic of the function.
@SpringBootApplication
public class AccountActivatedFunction {

    public static void main(String[] args) {
        SpringApplication.run(AccountActivatedFunction.class, args);
    }

    @Bean
    public Function<AccountEvent, Account> function() {
        return accountEvent -> {
            // Get the event log from the payload
            List<AccountEvent> events = accountEvent.getPayload().getEvents();

            // Get the account
            Account account = accountEvent.getPayload().getAccount();

            if (events != null && account != null) {
                // Get the most recent event (the log is sorted reverse chronological)
                AccountEvent lastEvent = events.stream().findFirst().orElse(null);

                if (lastEvent == null || lastEvent.getType() != ACCOUNT_ACTIVATED) {
                    account.setStatus(AccountStatus.ACCOUNT_ACTIVATED);
                } else {
                    throw new RuntimeException("Account already activated");
                }
            } else {
                throw new RuntimeException("Payload did not supply account events");
            }

            return account;
        };
    }
}
In the example above, we have our Spring Boot application class. This will be our entry point into the function. Here we define a function bean that handles every event dispatched by the account service core.
Now we have a runnable function that we can ship to AWS Lambda. We can even run this function locally for testing purposes. But to invoke the function from the account service core, we’ll need to deploy it to AWS Lambda.
There are some other things that we do need to worry about in the pom.xml, but for now we'll leave that to some upcoming documentation efforts.
Deployment
If you’re familiar with AWS Lambda, you can manually deploy each of the artifacts for the functions using the AWS console. The problem with what I just said is that no one in their right mind would manually deploy artifacts to the cloud, right? To make the DevOps part easy, I’ve created a CI/CD pipeline with a tool named Concourse that will automate the Lambda deployment.
To automate the deployment, we’re going to use CloudFormation, which provides an easy way to deploy changes for a set of components (known as a stack) as one atomic transaction from the AWS CLI. The first thing that is required for CloudFormation is a template that describes what it is we want to deploy.
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Account activated
Resources:
  accountActivated:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: demo.functions.Handler
      Runtime: java8
      FunctionName: account-activated
      CodeUri: ./account-activated-1.0.0-SNAPSHOT-aws.jar
      Description: Implements business logic for activating an account
      MemorySize: 1024
      Timeout: 30
      Role: 'arn:aws:iam::194021864310:role/service-role/public'
      Events:
        Api1:
          Type: Api
          Properties:
            Path: /accountActivated
            Method: ANY
In the snippet above we see a CloudFormation template for deploying the account-activated function. This template will create a package that is uploaded to an Amazon S3 bucket and then deployed to Lambda.
There’s nothing tremendously exciting about this process. To make this as simple and boring as possible, I’ve created a deploy-function.sh script that will be used by Concourse to automate function deployments.
export AWS_ACCESS_KEY_ID=$aws_access_key_id
export AWS_SECRET_ACCESS_KEY=$aws_secret_access_key
export AWS_DEFAULT_REGION=$aws_default_region

package() {
  # Create a CloudFormation package for this AWS Lambda function
  echo -e "Packaging deployment..."
  aws cloudformation package \
    --template-file package.yaml \
    --output-template-file deployment.yaml \
    --s3-bucket $bucket_name || error_exit "Packaging failed: Invalid S3 bucket..."
  deploy
}

deploy() {
  # Deploy the CloudFormation package
  echo -e "Deploying package from s3://$bucket_name..."
  aws cloudformation deploy \
    --template-file deployment.yaml \
    --stack-name $function_name || error_exit "Deployment failed..."
  # Remove the deployment package
  rm ./deployment.yaml
}
In the snippet above we see the magic of the deploy-function.sh script. To make sure that this script works, we need to provide the following crucial bits of information.

- AWS access key ID
- AWS secret access key
- AWS default region
- S3 bucket name
- Function name
The last and final concern we’ll take care of is the invocation of Lambda functions from a Spring Boot application.
Lambda Invocation
Once our Spring Cloud Function apps have been deployed to AWS Lambda, we can begin invoking them from the account service core. To make this easy, I’ve created a helper starter project that will manage the invocation context to AWS.
This project makes it easy to start invoking AWS Lambda functions from a Spring Boot application. All we have to do is to update the account service core configuration with the AWS IAM credentials that we used to deploy the CloudFormation package. I've created a configuration class that will allow you to populate this in the application.yml of the service core.

account-core application.yml
spring:
  profiles:
    active: development
server:
  port: 0
---
spring:
  profiles: development
amazon:
  aws:
    access-key-id: replace
    access-key-secret: replace
Now, I'm not a fan of saving sensitive credentials to disk, and neither should you be. That's why Spring Boot supports overriding configuration properties using environment variables.
export AMAZON_AWS_ACCESS_KEY_ID=<replace>
export AMAZON_AWS_ACCESS_KEY_SECRET=<replace>
Now you can run the account service core locally using the following command.
mvn spring-boot:run
The service core will start up—and if the IAM keys were configured correctly—you can start calling your functions from the Spring Boot application. To verify that this is working, try creating a new account and executing the suspend command.
2017-07-06 18:47:29.027 INFO 64845 --- [uspendAccount-2] demo.function.LambdaFunctionService : accountSuspended log:
START RequestId: 78824b17-62a5-11e7-bd48-e3bbbc0eed75 Version: $LATEST
END RequestId: 78824b17-62a5-11e7-bd48-e3bbbc0eed75
REPORT RequestId: 78824b17-62a5-11e7-bd48-e3bbbc0eed75 Duration: 5.05 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 106 MB
In the snippet above we can see that the account-suspended Lambda function was successfully invoked.
The cold start time of a Spring Cloud Function app isn't exactly ideal. The first time a function is invoked, it will take up to 20 seconds to start the app. After the first request, things will run much faster. We'll cover this more in the next post.
Summary
In this post we looked at the basics behind a service block architecture. While this post tries to be comprehensive, there are a lot of moving parts that it may have left out. Spring Cloud Function is in its early days, but shows powerful promise of being a great serverless framework.
In the next post, we'll cover more of the logistics for creating a serverless CI/CD pipeline using Concourse. We'll also look at how we can use the open source platform Cloud Foundry to inject service credentials into a Lambda function. This is an important goal because Cloud Foundry provides a portable abstraction that doesn't lock you into a single cloud provider. This means that you can use Lambda functions with your own services!
Special thanks
Spring Cloud Function is a very exciting new project in the Spring ecosystem. I would like to give a special thanks to Dr. Dave Syer for helping me out with the examples in this post. There are many others to thank on the Spring Engineering team for this awesome new project, namely Mark Fisher for incubating and driving the project forward. Also, the one and only Mark Paluch who was kind enough to review my usage of Spring Data reactive repositories.
Also, a huge thanks to James Watters for being such a huge supporter, advocate, and driver of Spring. Back in December James tweeted the lone words Spring Cloud Function as kind of a teaser, which initially got me very excited about this project. This blog post took months of research and experimentation, so if you found it useful, please share it.
Until next time!