Tuesday, May 20, 2008

Layered Service Models are Bad (continued...)

In my previous post I introduced a couple of typical example business processes for an insurance company and illustrated how they translated to a service model based on process, task and entity services. After listing a number of shortcomings of that model, I promised to suggest an alternative service model that didn't suffer from the outlined issues.

As I previously mentioned, services should be centred around cohesive business areas/capabilities. We also wish to decentralise our data so that each service has the data it needs locally to service each request without retrieving data from other services. Furthermore, we want to rely mainly on publish-subscribe messaging in order to avoid the coupling introduced by command messages.
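To make the distinction concrete, here is a minimal in-memory sketch of publish-subscribe. The `Bus` class, topic name and event shape are purely illustrative (not any real service-bus API): the point is that the publisher emits an event without knowing who consumes it, whereas a command message ties the sender to a specific receiver.

```python
from collections import defaultdict

class Bus:
    """Toy in-memory topic bus - illustrative only, no durability."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # The publisher has no knowledge of its subscribers -
        # this is what keeps the services loosely coupled.
        for handler in self._subscribers[topic]:
            handler(event)

bus = Bus()
seen = []
bus.subscribe("PolicyCreated", lambda e: seen.append(("Billing", e["policy_id"])))
bus.subscribe("PolicyCreated", lambda e: seen.append(("Commission", e["policy_id"])))
bus.publish("PolicyCreated", {"policy_id": "P-1001"})
```

Both subscribers react to the one event; adding a third subscriber would require no change whatsoever to the publishing service.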

So our first order of business is to identify the cohesive business capabilities upon which we will base our services. Analysing the originally described business processes, there appear to be five cohesive business areas at play - Premium Quotation, Policy Administration, Commission Management, Billing and Customer Management.

Creating a service around each of these business areas gives us the following service model.


Firstly note the simplicity of this model over the model outlined in my previous post. There are only five services, and the dependencies between the services are limited to communication over three topics.

There is no need for any entity services as each service depicted in the above model houses its own data locally. Consequently we have no need for cross-service transactions.

The business logic previously contained in task and process services relevant to a given business area is now fully contained in the corresponding service. As a result, we have far improved encapsulation. Our data is protected from direct manipulation by other services.

The Policy Creation and Policy Reinstatement processes are now mapped to the above service model as illustrated below.

Policy Creation

Policy Reinstatement

Note also with this service model that we have no synchronous request-reply communication between services whatsoever. This results in a considerably more robust, performant and loosely coupled architecture. The high cohesion found within each service further contributes to the loose coupling.

We have coarser grained, business relevant messages exchanged between services and there are considerably fewer message types to deal with. And to top it all off, this service model will take less effort to implement and considerably less effort to adapt and maintain.

10 comments:

AndyHitchman said...

I note that you are retrieving the broker in several places in different services. Without entity services and with distributed data, where is the data sourced and how is consistency maintained?

Andy.

Bill said...

Hi Andy,

Each service holds the data it needs to perform its function locally (in its own local database) in the form/structure that best suits the domain it services.

So one of the services would be deemed responsible for managing broker information.

None of the services in the given service model would really be appropriate for that. The example service model was not intended to be a complete service model in that it detailed only the services necessary to handle the two example business processes.

There would be a great number of additional business processes that would ordinarily need to be handled in an insurance business. One such process would be broker management.

Depending on how these business processes were structured and interacted with other business processes, we might consider having a Broker Management Service.

Alternatively, there may be other participants in the sales channel so we might consider having a Channel Management Service.

However, we might find that managing the channel/brokers was something primarily done by the sales function, so we might have a Sales service to handle the management of brokers (in addition to other responsibilities).

Whatever service was responsible for supporting the broker management process, that service would publish events whenever a broker was updated.

Other services (such as the Commission Management Service and Premium Quotation Service) that relied on broker information would be subscribed to that event.

When these subscribed services received such a notification, they would update their own internal broker representations.

These notifications are sent over the service bus using durable messaging, which means that we don't run the risk of the notification being lost.

As such, we can make certain guarantees about our data being synchronised between our various services that share that data (albeit in different forms).
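As a rough sketch of that synchronisation pattern (all class, field and event names below are hypothetical), each subscribing service upserts its own local, differently shaped representation when the broker-owning service publishes an update:

```python
class CommissionService:
    """Holds its own local broker data, shaped for commission calculation."""
    def __init__(self):
        self.brokers = {}  # broker_id -> only the fields this service needs

    def on_broker_updated(self, event):
        # Upsert: whether the broker is new or changed, the local copy
        # ends up consistent with the published event.
        self.brokers[event["broker_id"]] = {
            "name": event["name"],
            "commission_rate": event["commission_rate"],
        }

class QuotationService:
    """Keeps a different local shape of the same broker data."""
    def __init__(self):
        self.broker_names = {}  # broker_id -> display name only

    def on_broker_updated(self, event):
        self.broker_names[event["broker_id"]] = event["name"]

# The broker-owning service publishes; each subscriber updates locally.
commission, quotation = CommissionService(), QuotationService()
event = {"broker_id": "B-7", "name": "Acme Brokers", "commission_rate": 0.05}
for handler in (commission.on_broker_updated, quotation.on_broker_updated):
    handler(event)
```

Note that neither subscriber queries the owning service at request time - the data is already local by the time it is needed.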

Bill

Russ said...

Hi Bill,

How do you keep the data in sync when another service is developed after the service responsible for the business data has been in production (i.e. it has missed the business data events)?

Do we need to provide a pull-type service operation to handle this situation? And if some of the data needs to be processed as part of the service's business logic, does it make sense to create a separate topic and publish all of the business data to it (while ensuring that the other services do not act on the event again)?

Kind Regards,
Russ.

Bill said...

Hi Russ,

Good question! This situation occurs quite frequently, and not just when engaging in SOA leveraging publish-subscribe.

For example if you acquire a new CRM package, that package is installed with a clean database. You need to populate all your existing contacts and other related information into the CRM database.

This is achieved by a data import operation.

The same applies in this case. Your new service has missed out on all the existing business events, so it must have a baseline to begin from that synchronises it with all your other existing services.

You want to extract the relevant data from your existing services.

There are many ways this can be achieved. The simplest is to shut everything down, migrate the data, create the new event subscriptions and then power everything back up again.
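A toy sketch of that simplest approach, assuming direct access to the owning service's current state during the outage (all names hypothetical): replay the existing state into the new service as if it were a stream of events, then subscribe before resuming, so no future event is missed.

```python
class NewService:
    """A late-added service that must be baselined before going live."""
    def __init__(self):
        self.brokers = {}

    def on_broker_updated(self, event):
        self.brokers[event["broker_id"]] = event["name"]

class Bus:
    """Toy bus - records subscriptions only."""
    def __init__(self):
        self.subs = []

    def subscribe(self, topic, handler):
        self.subs.append((topic, handler))

def baseline_new_service(existing_brokers, new_service, bus):
    # One-off migration run while the system is shut down:
    # 1. replay current state through the same handler the
    #    service will use for live events;
    # 2. subscribe before anything is powered back up.
    for broker in existing_brokers:
        new_service.on_broker_updated(broker)
    bus.subscribe("BrokerUpdated", new_service.on_broker_updated)

existing = [{"broker_id": "B-1", "name": "Acme"},
            {"broker_id": "B-2", "name": "Zenith"}]
svc, bus = NewService(), Bus()
baseline_new_service(existing, svc, bus)
```

Reusing the event handler for the import keeps the migration and the live synchronisation logic identical, so there is only one code path to get right.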

If this is impractical for your environment, then there are alternative strategies I'd be happy to discuss with you. The alternatives you have available to you will depend on your environment.

Regards,
Bill

Russ said...

Hi Bill,

Would Layered services be better in the following scenario?

Let's assume there are two different models within an organisation for how you write policy. The second is a new business opportunity that arose after your Policy Administration service was created.

Both models use the same Reinstate Policy.

If you were using the layered approach you could simply create another Process Service for NewWay Policy Create Service/Write Policy Service and reuse the reinstate policy service. You would also leverage the Policy Entity service so all would hopefully work nicely :)

However, if we modify the Policy Administration Service we would have to abide by the existing interfaces and rewrite the internals of the service. You then have versioning issues, management issues, etc. You definitely would not want to create a new Policy Administration Service, as you would then have blown away all of the existing work you have done.

A common scenario that I have is that the business process changes slightly from business unit to business unit. Some units have legitimate cause for these differences (for example, lower-value transactions vs. higher-value ones), which result in additional processes around the core tasks.

My ultimate question is how can you extend the Policy Administration Service for different uses without having to change the service implementation?

Kind Regards,
Russ.

Bill said...

Hi Russ,

Probably the first thing to note is that there is an underlying assumption with the layered service model approach that services will be highly reusable so that they can be rewired in support of new unexpected processes.

More often than not however this proves not to be the case. The larger the granularity of a service, the less reusable it becomes due to small differences in functionality required by its consumers.

So although we might say we'll be able to reuse the Reinstate Policy service, we'll very likely find that we'll need to make some subtle changes to it in order to support new process services that leverage it.

One of the reasons for this is the use of command messages. Because the sending service is instructing the receiving service to do something on its behalf, this is an inherent form of coupling.

The receiving service will struggle under the burden of all the slightly different needs of all its consumers.

Another disadvantage of the layered service model approach is that it is difficult to get reuse between process services.

So in your example, the two Write Policy processes are quite similar. If we add a new process service for the second variant, we get no reuse between those process definitions.

If in the future we want to update the core Write Policy process, we'd need to update a number of variations.

As such, there might be an argument for updating the existing Write Policy process service to cater for the variation - which means we are updating an existing service implementation anyway.

Note also that the Policy Administration service (in the self-contained process-centric model) could be implemented using a workflow engine if desired.

So you could model the Write Policy process in the workflow engine. Creating the New Write Policy process would be a matter of adding a new workflow.

You would then need to figure out under what circumstances that new process would run. If it runs on receipt of a notification, then we don't need to update our service contract because notification messages are defined in the publishing service's contract, not the subscribers' contracts.
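One way to sketch this routing (hypothetical names throughout - a real workflow engine would do far more): register each process variant against the message and channel that trigger it, so adding the new Write Policy variant means adding an entry rather than changing existing process definitions.

```python
# Map (message type, sales channel) to a process definition
# inside the Policy Administration service.
workflows = {}

def register(message_type, channel):
    """Decorator that registers a workflow for a trigger."""
    def wrap(fn):
        workflows[(message_type, channel)] = fn
        return fn
    return wrap

@register("QuoteAccepted", "standard")
def write_policy(msg):
    return f"standard policy for {msg['quote_id']}"

@register("QuoteAccepted", "new_channel")  # the later business variant
def write_policy_new_way(msg):
    return f"new-way policy for {msg['quote_id']}"

def dispatch(msg):
    # The service contract is unchanged: the same notification arrives,
    # and routing to the right variant happens inside the service.
    return workflows[(msg["type"], msg["channel"])](msg)

result = dispatch({"type": "QuoteAccepted",
                   "channel": "new_channel",
                   "quote_id": "Q-9"})
```

The variation is an internal implementation detail of the service, invisible to publishers and to the service contract.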

If the new process runs on receipt of a command message, then this may be upon receipt of an existing command message - again we don't need to change our service contract.

But even if we do need to update our service contract, these things happen often enough. This is why we need service versioning policies.

So in conclusion, yes - you would need to update the Policy Administration service implementation.

However, that's never posed a problem for me in the past. And even using the layered service model approach, you'll very likely need to update service implementations as you add new processes.

Bill

tri said...

Hi Bill,

really enjoyed reading your post on layered service modelling and your general thoughts. It's been a while since you published this post - hope you're still getting the comments!

I find the model you're describing very attractive - large grain service modelled along business areas; purely asynchronous reliable communication; no centralized data; no cross-service transactions.

In the past I've worked as architect on an eGovernment project that followed just these same architectural principles. That was by design and also by necessity -- since the underlying product (SonicESB) naturally guided developers into modelling services as entities with a purely message-oriented interface (of course, that was putting the cart before the horse). Services in SonicESB aren't really services in the conventional sense -- they are written against an API and can communicate with other Sonic services only using that API. The API binds onto a JMS abstraction layer that exercises the underlying MOM implementation. However, some queues that drive services may be exposed via Web service interfaces to the external world. Sonic also offers a lightweight process construct: itinerary-based routing includes a routing slip with every message and describes which queues/services must be traversed in what order.

The project team really bought into the benefits of temporal decoupling and came to see it as a godsend -- they also came to accept the unusual asynchronous communication style that goes with it:
* an individual service may be taken down for maintenance without negatively affecting the rest of the system -- messages simply get buffered until the service comes back up
* as a corollary to the above: services need to handle only the average load, not peak loads. When bursts of messages arrive they are simply backlogged in the queue until the service can process them.
* non-blocking for the message producer: fire-and-forget with reliable messages; producers "know" that messages will eventually arrive - the recipient's SLA guarantees that this happens within a given timeframe
* monitoring of queues: analyze queue utilization over time; detect usage patterns and adapt to and/or predict load situations
* inspection of in-flight messages: analysis of messages; remove and store individual messages; replay them at a later stage (or for testing); isolate poison messages and use them for failure analysis
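The buffering behaviour in the first two points can be sketched with a plain in-process queue (toy code, not SonicESB - a real broker would persist the messages):

```python
from collections import deque

queue = deque()  # stand-in for a durable queue

def produce(msg):
    # Fire-and-forget: the producer never blocks on the consumer,
    # whether the consumer is up, slow, or down for maintenance.
    queue.append(msg)

# Consumer is down: a burst of messages simply accumulates.
for i in range(5):
    produce({"seq": i})

# Consumer comes back up and drains the backlog at its own pace.
processed = []
while queue:
    processed.append(queue.popleft()["seq"])
```

Nothing was lost and ordering was preserved, even though the consumer was unavailable for the entire burst.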

While this particular system was built on top of Sonic, the general principles can be carried over onto any technology with more or less pain. I'm curious: what underlying message broker and communication (JMS, AMQP, XMPP, WS-Notification) do you use for reliable asynchronous messaging? how do you bind your service? and most importantly: how do you describe the schema of your messages?

Richard Veryard said...

You have shown problems with one particular layered service model. This doesn't mean that all layered service models are bad, does it?

As you point out, one reason commonly cited in support of the layered service model approach is the hope that services will be highly reusable. But there is a much more important reason for layering - based on the expectation that each layer has a different characteristic rate of change. When done properly, layering should make things more flexible. When done badly (as in your example and, I'm sorry to say, in the writings of a lot of self-appointed SOA experts) then layering can have the opposite effect. See my post on Layering Principles

Tu said...

Bill,

In your solution to the layered approach, you haven't mentioned how transactions across services would be handled. Even though the services in your scenario are asynchronous, if the invoice creation step fails for whatever reason, how would you roll back the commission that was created in the Commission service? Wouldn't the same problem exist as in the layered approach?

-Tu

Anonymous said...

And zero re-usability. Way to go Bill