Tuesday, April 29, 2008

Domain Modelling

Domain modelling is a technique commonly used by solution architects to build a conceptual model describing the problem domain to be solved. A domain model is a static view of the problem domain, describing the relevant entities (such as customer, order, etc.), their relationships, their major attributes and any relevant actions they perform in the context of the problem domain.

As such, a domain model is a visual representation of the vocabulary of a problem domain. A domain model does not assume that software is to be used as part of the solution. An example domain model is shown below.


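Sketched in code rather than as a diagram, and assuming a simple, hypothetical sales domain purely for illustration, the vocabulary such a model captures might look like this:

    import java.util.Date;
    import java.util.List;

    // Hypothetical sales domain, for illustration only: the entities, major
    // attributes and relationships a domain model typically captures.
    class Customer {
        String name;
        String segment;          // major attribute
        List<Order> orders;      // relationship: a customer places orders
    }

    class Order {
        Date placedOn;
        List<OrderLine> lines;   // relationship: an order contains order lines
    }

    class OrderLine {
        Product product;         // relationship: a line refers to one product
        int quantity;
    }

    class Product {
        String sku;
        String description;
    }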
The domain to be modelled is not restricted to a specific business problem to be solved. It may simply be an area of the business (such as marketing or sales) we wish to model. In this context, the domain model is a single static viewpoint of an enterprise, visually capturing the language used to describe the business from that perspective.

It is interesting to note the significance of perspective in domain modelling. One might think that in the real world a customer is a physical entity with a fixed set of attributes and actions he or she can perform, not open to interpretation. However, we need to consider that a customer of one organisation may be an employee of another.

As such, the same physical entity may have vastly different representations in different domain models. Perspective is everything, and we need to ensure that the perspective of a given domain model is sufficiently specific so that it does not become confused and overly complex.

In the context of SOA, each service will have its own domain model. We strive for high cohesion within a service domain model by ensuring that the described business area is highly specific, but loosely coupled with business areas supported by other services.

Eric Evans in his classic book Domain-Driven Design provides considerable guidance on domain modelling and outlines many of the design patterns we may leverage in designing software systems centred around domain models. It is essential reading for those looking to design and develop custom built services.

It must be noted however that despite its many uses, domain modelling gives us only a static view of the business. We look to process mapping to provide a dynamic view that describes how entities interact over time. But more on that in my next post, so stay tuned!

Monday, April 28, 2008

Pragmatic Legacy Application Rejuvenation

In my last couple of posts, I've focussed on how we go about leveraging existing legacy systems as part of transitioning an organisation to SOA. Just to reiterate, we first identify and define our services based on the business context, and then determine which existing applications fit into each service. We then apply a layer of abstraction between the service boundary and boundaries of the applications held within.

However, this is not always possible. Legacy applications are existing IT assets that act as constraints on our SOA. Although there are many ways we can attempt to expose events from legacy applications, sometimes their design simply makes it impossible.

In these situations, we must unfortunately compromise on our service definitions. We should however consider the ramifications of this. Compromising on a single service contract can impact the design of many other services. This long-term cost should be weighed against the short-term cost of replacing the troublesome legacy application sooner rather than later.

Thursday, April 24, 2008

Publish-Subscribe with Legacy Applications

Graeme posted a couple of questions regarding my recent post on Web services integration. Specifically he was asking how best to get access to events in third party applications that do not support any native mechanism for hooking events that occur within the application.

This is a good question as it is a common problem to overcome. The fact of the matter is that when we are trying to leverage third party applications as part of our SOA, we are constrained by the limitations of those applications. If an application does not provide a native way of getting at events, we have no choice but to extract the events from the application database.

As Graeme points out, this is less than desirable as the database schema of an application is subject to change. The application vendor will likely give you no guarantees about the stability of the database schema, nor warn you when it is about to be updated.

Although this is indeed true, I don't think the problem is serious enough for us to avoid publish-subscribe. Any time we upgrade or replace an application with which we integrate as part of a service, we may have to do work to cater for changes in how we integrate with that application. This problem is not limited to hooking events.

So what techniques can we use to get at events from the database? Well whatever works really. One way we can go about it is to apply some database triggers to write into a log table whenever something changes in the database. We can then poll that log table and publish events based on entries found there.

Another approach is to add a nullable timestamp column which is updated by a trigger every time a row is inserted or updated. We then poll the table for records with a timestamp after the last time we polled.
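To make the log-table variant concrete, here is a minimal polling sketch in Java. All names (customer_log, EventPublisher and so on) are hypothetical, the watermark handling is simplified, and the trigger itself (plain SQL, specific to your database) is not shown:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Minimal sketch: poll a trigger-populated log table and publish an event
    // for each new entry. A real implementation would persist the watermark.
    class CustomerLogPoller {
        private long lastSeenId = 0;

        void poll(Connection conn, EventPublisher publisher) throws SQLException {
            String sql = "SELECT log_id, customer_id, change_type FROM customer_log "
                       + "WHERE log_id > ? ORDER BY log_id";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setLong(1, lastSeenId);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        lastSeenId = rs.getLong("log_id");
                        // Translate the database change into a business event.
                        publisher.publish(new CustomerChangedEvent(
                                rs.getLong("customer_id"), rs.getString("change_type")));
                    }
                }
            }
        }
    }

    // Hypothetical collaborators, stubbed so the sketch is self-contained.
    interface EventPublisher { void publish(Object event); }

    class CustomerChangedEvent {
        final long customerId;
        final String changeType;
        CustomerChangedEvent(long customerId, String changeType) {
            this.customerId = customerId;
            this.changeType = changeType;
        }
    }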

Some applications make getting at events from the database quite easy as they keep some kind of historical data. So for instance, a CRM application may keep a record of changes to the customer for audit purposes. In this case, we would just poll the audit table for any new records.

The effort required to hook these events in my opinion is certainly worth it. Events in SOA are highly reusable. So the fruits of your labour can be reused across a number of services. It also enables a decentralised data architecture, which considerably reduces coupling between services.

Wednesday, April 23, 2008

Legacy Application Rejuvenation Example

Badis recently asked a good question regarding ways of exposing functions which are implemented by existing applications as part of an SOA:

...I am new to SOA and I work for a group who owns 5 different large web applications. These applications, although all Java-based, use different technologies. Some are implemented with EJB, others with Spring or home grown frameworks. What is the least-invasive approach to expose these functions as services in your experience?

Before answering this question, there are a number of things that need to be explored. Firstly, I want to just raise a flag regarding terminology here. We do not expose functions within an application as services. A service in the context of SOA is considerably more coarsely grained than a single function within an application. Usually we will find entire applications sitting behind service boundaries.

Before considering how to expose functions in an existing application as part of an SOA, we first need to define our services. Services should be designed from the outside in, not the inside out. That is, we need to understand the service's place in the world before figuring out what goes inside. We want to try and stay clear of a bottom-up Web services integration approach, where services are defined based solely around our existing applications.

Each service should support one or more cohesive business processes. Usually, existing applications do no such thing. They usually embody a number of loosely related business processes that were all brought together to serve a particular purpose at the time the application was built.

Our services need to be identified and defined such that they are aligned with the business. This is the only way we are going to get high cohesion within services and loose coupling between them. Once we have defined our services, we can then determine to which service(s) each of our Web applications belongs.

We may find that all five Web applications actually are concerned with the same single service (although this would be unlikely). For example, we may have five different CRM applications for dealing with different types of customers. In this case, all five applications would belong to a single Customer Management service. Although we wouldn't really have an SOA until we added more services. A single tree doesn't make a forest.

When defining our services, we'll very likely be making heavy use of publish-subscribe messaging. In this case, we will update and/or integrate with the existing Web applications not to expose functions as such, but rather to publish events and process event messages when they are received.

We need to evaluate the semantic/functionality gap between our service contracts and the Web application(s) inside each service boundary. Where the gap is very wide and/or we don't want to or can't directly modify the source code of the Web application(s), then we would consider using some kind of EAI tool. Where the gap is very narrow and we are happy to modify the application source code, we can update the application(s) to sit directly on the service bus.

When updating an application directly, which is the situation Badis asked about, we implement a "service interface layer" as part of the application. This layer sits on top of the "application service layer". The application service layer is shared by both the service interface layer and your UI logic (i.e. your controllers in the case of MVC). This allows us to expose the same logic via the UI as well as to other services.

Note that an application service is very different from a service in the context of SOA. I've spoken in the past about how horribly overloaded the term "service" is in IT architecture - this is yet another example.

The service interface layer consists of a number of message handler classes (one for each type of message the application must process) as well as event classes (one for each type of event message the application publishes). This layer would be implemented using the same technology as the rest of the application (EJB/Spring/custom).
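As a sketch of the shape this takes (the class and method names here are illustrative, not from any particular framework), a handler simply delegates to the application service layer:

    import java.util.List;

    // Illustrative message handler in the service interface layer. It translates
    // a message received off the service bus into a call on the application
    // service layer, which is also used by the UI controllers.
    class RaisePurchaseOrderMessageHandler {
        private final OrderApplicationService orderService;

        RaisePurchaseOrderMessageHandler(OrderApplicationService orderService) {
            this.orderService = orderService;
        }

        void handle(RaisePurchaseOrderMessage message) {
            // Same logic path as the UI; only the entry point differs.
            orderService.raisePurchaseOrder(message.customerId, message.lineItems);
        }
    }

    // Hypothetical collaborators, stubbed for completeness.
    interface OrderApplicationService {
        void raisePurchaseOrder(long customerId, List<String> lineItems);
    }

    class RaisePurchaseOrderMessage {
        long customerId;
        List<String> lineItems;
    }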

The application needs some means of receiving messages off the service bus. In essence, the message handlers need to be hosted somewhere. They could be hosted in a Web server (if we are using the HTTP transport), or alternatively you could host your message handlers directly inside an ESB.

In essence, each message handler will be invoked by the infrastructure once the appropriate message is received. The message handler takes over from there.

Monday, April 21, 2008

Service Granularity

Two key outcomes that we aim for in the design of software systems are high cohesion and loose coupling. Cohesion is the degree to which the concerns within a single software component are related. Coupling on the other hand is the measure of interdependency between software components.

We have discussed many strategies for reducing coupling between services in the past. Examples include decentralising your data, using process centric interfaces, using asynchronous messaging, using document-centric messaging, and avoiding command messages. Furthermore, SOA as a style of architecture promotes loose coupling through the use of service contracts that encapsulate the implementation details of a service behind its boundary.

However one of the largest determining factors of coupling is in fact the level of cohesion found within services.

One way of looking at cohesion is to consider how precisely a domain model expresses the problem domain. If the domain model contains ambiguities such that definitions become confused, then we have low cohesion. Where we start to lose cohesion within a domain model, we look at subdividing the model, which Eric Evans makes reference to with his Bounded Context pattern.

We will then likely have the same entity (e.g. a customer entity) represented in both models, but the representations will see the entity from different points of view, optimised for the subdivided domain model. So for example, we may have Sales and Billing domain models - both involving a customer entity, but seeing it from different perspectives. This would result in a different data structure in both models with different business logic, but both pertaining to the same customer entity.
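Sketched in code (the field names are illustrative), the two representations might look like this:

    // The same real-world customer, modelled once per bounded context.
    // To Sales, the customer is a prospect being managed:
    class SalesCustomer {
        String name;
        String assignedSalesRep;      // meaningful only in the Sales model
        int openOpportunityCount;
    }

    // To Billing, the same customer is a party to be invoiced:
    class BillingCustomer {
        String name;
        String billingAddress;        // meaningful only in the Billing model
        int daysPastDue;              // drives Billing-specific business logic
    }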

In SOA terms, when we see the concerns of one service becoming ambiguous and confused, we look at subdividing it into two or more services such that the domain models of each service have high cohesion again. If done correctly, we will find that the new services will also have loose coupling between them, as the concerns between services will be only loosely related.

If we subdivide too far, we will find that information models and business logic become duplicated between services, as the definitions of the same entity in different domain models become increasingly similar.

So there is a delicate balance with regards to service granularity such that we get the right mix of cohesion and coupling.

There is no magic formula for service granularity. It is simply a matter of iteratively refining your architecture until you have sufficiently loose coupling and high cohesion. Where you start seeing the need for synchronous request-reply interactions between services, cross-service transactions, CRUD interfaces and duplication of logic and data representation between services, you have subdivided too far and must start combining services.

Where you start seeing confused representations of entities in the domain model of your service, you have not subdivided far enough.

Saturday, April 19, 2008

Web Services Integration

As I've recently discussed, SOA and EAI are two fundamentally different concepts guided by different principles and objectives. However, EAI practitioners will often leverage SOA (as a style of architecture) as a means to achieving their EAI objectives.

In this case, EAI practitioners will not interface directly with applications they are attempting to integrate. That is, they will not leverage direct database access, file transfers or RPC APIs in integrating applications. Rather they will leverage only Web service interfaces as the means of integration with all applications.

Where the desired functionality is not exposed by existing Web service interfaces, a service wrapper is applied around the application, which acts as a layer of abstraction between the application implementation and the desired functionality to be exposed.

This practice is known as Web services integration, or service oriented integration. This approach offers some improvements over traditional EAI in that all applications speak the same language (SOAP), and that we have looser coupling as a result of encapsulating application internals behind service contracts.

Despite these improvements, Web services integration still suffers from the fact that it is a bottom up approach. That is, we are focussed only on integrating existing applications without any thought going into the design of the applications themselves or how to align our services with the business they support.

As such, just as with EAI, we end up with a sub-optimal distribution of data and functionality across the applications being integrated. This then results in a lot of chatty, synchronous, CRUDdy RPC-style interactions between applications, and a lot of heavy lifting to be done by middleware components.

This in the end means we have tighter coupling between services, which impacts our ability to safely and easily make changes and also negatively impacts our overall performance and reliability.

Unfortunately, Web services integration is the vision being pushed by most vendors today, and as such forms a large part of the documented "guidance" on SOA available. Consequently, Web services integration is the most commonly practiced approach to SOA, and is the image most people get in their minds when they think about SOA.

In order to get a better quality SOA, we must align our services with the business, expose process centric interfaces, rely mainly on publish-subscribe messaging and decentralise our data.

Friday, April 18, 2008

AIIA 2008 iAward

Change Corporation has just been awarded the AIIA 2008 iAward for Western Australia in the categories of e-Logistics, Financial Applications and General Applications for the solution we delivered for Fortron Insurance Group.

This project entailed an SOA transformation which involved transforming core business processes as well as the IT systems that support them. Congratulations to our team who have worked tirelessly to make this project such a great success!

Thursday, April 17, 2008

SOA and EAI (continued...)

I was recently speaking to someone heavily involved in EAI projects about the relative advantages of SOA over EAI. I found his initial reaction rather curious. It was his view that SOA was an architectural approach that was applicable only in "green fields" contexts. That is, where services are built from scratch and existing IT assets are simply discarded.

This misconception is understandable. How can applications participate in an SOA if they were not designed to do so? Well this is indeed true to some extent. They certainly cannot participate in an SOA without help. I touched on this briefly when discussing COTS applications in the context of service boundaries.

This view is somewhat compounded by criticism that SOA involves too much theory and not enough pragmatism. That criticism to some extent derives from reports of SOA projects that got caught up in "analysis paralysis" and never really delivered anything.

EAI practitioners sometimes claim that the EAI camp is all about just getting the job done, while SOA practitioners are purists by nature and get too wound up in theory.

While it is true that some SOA initiatives have ended up this way, it is by no means a problem with the architectural style, merely a problem with how that style was applied. That is, it is a failure of the SOA practitioners involved, not the principles underlying SOA in general.

So how do we leverage existing IT assets in support of an SOA? We use a service wrapper. A service wrapper is effectively a layer of abstraction between new and existing IT assets that support a service, and the service boundary. We already have a well established approach for achieving this - EAI.

We do not want to suggest that SOA replaces EAI. Not at all. We just want to limit the scope of EAI to the service rather than the enterprise. We do not want to directly connect applications that sit in different services. To do so would violate our service boundaries.

All the existing EAI toolsets and principles are applicable in this context. In fact, many ESBs ship with EAI capabilities - including process orchestration engines and a suite of adapters for plugging into common legacy systems.

A message arrives at the service boundary, and the orchestration engine coordinates the activities of the various systems behind the service boundary appropriately. Orchestrations can also be initiated in response to events which are detected inside systems supporting a service. An orchestration can also send or publish messages back out onto the service bus as necessary.

So EAI is just as important as it ever was. We just limit the scope at which it is applied from the enterprise to the service.

Wednesday, April 16, 2008

SOA and EAI

In my last post, I discussed the difference between an application and a service. As such, we have now set the groundwork for differentiating between SOA and Enterprise Application Integration (EAI).

This is a point of confusion for most customers I speak to about SOA. On the surface, the two seem to serve a similar purpose. Both result in the integration of systems across an enterprise so that business processes can be automated, increasing operational efficiency and bringing down the cost of doing business.

In fact, I have heard many times that SOA is just EAI rebadged. This is not at all the case. The two solution offerings have very different concerns, involve different principles and should have very different objectives.

EAI concerns itself with finding ways and means of connecting existing applications together in an automated way. Without this "glue" connecting our applications, we must rely on human beings to be the point of interaction between applications, which is costly and error prone.

The first thing to note is that EAI is about connecting existing applications. EAI does not address the design of the applications themselves. Many applications were not designed with integration in mind. Integration in this case is an afterthought. As such it is very likely we will have sub-optimal distribution of functionality and data between these applications, making it a challenge to make them all work together in support of business processes that span multiple applications.

The second thing to note is that EAI is concerned with integrating applications (as opposed to services). As I have previously pointed out, applications are an orthogonal concern to services. Applications are designed primarily to meet the needs of a user, rather than support a specific business function.

SOA on the other hand is concerned primarily with the definition and design of services, rather than with how they should be connected, beyond restricting the means of communication between services to messages. We have a strong focus on defining and enforcing service boundaries as a means of controlling coupling between services.

With EAI, we resort to any means necessary in order to connect applications. This includes sharing databases, remote procedure calls, file transfers, as well as messaging. Where messaging is not employed as the means of integration, we have coupling between application implementations. If we upgrade or replace an application, then it is extremely likely we will greatly upset all applications with which the application interacts. If we attempt to consolidate applications, that will have broad reaching effects on IT systems across the organisation.

In order to compensate for sub-optimal distribution of functionality and data across applications, EAI relies very heavily on middleware to coordinate the activities and flow of data between applications. We tend to end up with extensive process orchestrations defined and executed within the middleware.

This is in part due to the fact that most applications today expose data centric rather than process centric integration points. Where no integration points are explicitly defined by the application, we rely on direct database access and manipulation in order to achieve our objective. By definition, this approach is completely data centric. As such, the process aspects of the integration effort fall completely in the domain of the middleware.

SOA can be applied in an organisation as a means of integrating applications, but integration should not be the primary objective. The primary objective should be to achieve loose coupling in order to attain business agility, with integration as a by-product of that effort.

Tuesday, April 15, 2008

Services and Applications

Sometimes in my travels I hear a service described as being a type of application, and sometimes I hear an application described as being a service (in the case where we have an application that exposes service endpoints). In reality however, a service never directly aligns with an application.

Consider a risk assessment application where risk assessors review risk profiles and make a series of rulings resulting in a risk assessment. In this case, the application is the tool leveraged by the risk assessor. The service actually encompasses this tool as well as the risk assessors performing the assessments. This is explained in more detail here.

An application is a piece of software leveraged by a user (via a user interface) to perform a task of value to the user, whereas a service is a coarse grained unit of logic that performs a specific function and communicates with external parties by way of exchanging messages that conform to the service contract.

A service may contain zero or more applications - zero if the service is fully automated. An application may span many services (in the case of a composite application).

As such, in any given architecture services and applications are orthogonal concerns.

Monday, April 14, 2008

SOA Defined

Martin Fowler once referred to SOA as an acronym for Service Oriented Ambiguity - not surprising given the amount of ambiguity in the term's constituent words. I have previously commented on the ambiguity inherent in the term service. Architecture contains just as much ambiguity.

Architecture may refer to the process of designing something (in the sense that someone can be skilled in architecture), the style in which something is designed (for example, Victorian architecture), or a specific design (such as the architecture of a building). In the context of SOA, all three definitions apply.

As an architectural style, SOA refers to the style of designing systems that are comprised of interacting coarse grained units of logic called services. Services interact by way of exchanging explicitly defined messages via their endpoints. The messages and endpoints are described by service contracts. Policies determine the requirements incumbent upon a service consumer in order to be capable of interacting with a service.

Any architecture meeting the above description is considered service oriented. This then leads us to our second definition. An SOA may refer to any architecture conforming to the SOA architectural style.

And finally, SOA may refer to the process of producing an architecture consistent with the service oriented style of architecture.

It is important to note that there are good architectures and bad architectures that constitute SOA from the standpoint of architectural style. Just because you have a bunch of services does not mean you have a good architecture. The design patterns and best practices discussed in this blog thus far provide guidance for delivering a high quality SOA (that is, a high quality architecture in the SOA style). However, they are not prerequisites for an architecture to be service oriented.

Due to the ambiguity inherent in the term SOA, we must take care when using it that the intended context is clear.

Sunday, April 13, 2008

Design before Technology

Design patterns for the most part are technology agnostic. They capture a best of breed approach for solving a problem in a particular context. Technology is an enabler of these design patterns. We use technology to implement solutions whose designs are based on design patterns.

Design patterns evolve much more slowly than technology, and are rarely made obsolete. New design patterns arise to face new design challenges and to better manage complexity, whereas new technology arises to make the implementation of these designs easier such that they can be done at lower cost.

Technology led architecture is a recipe for failure. In this situation we see architects designing solutions around some new technology being pushed by some vendor. Where we tend to see this most is ESB driven architecture. Businesses are told they must have some expensive piece of middleware to make their SOA a reality, but then the implementation fails due to a lack of proper planning and design.

This is why it is so vitally important for SOA practitioners to be well versed in the theory and best practice of SOA rather than simply being trained in the use of one or more middleware products. A middleware product in the hands of an architect without a proper understanding of the guiding principles of SOA is very dangerous indeed.

The architecture must come first and be based on the business requirements and business context, guided by design patterns. The technology must then be selected to support the architecture. This way we ensure that the selected technology is appropriate for the solution.

Achieving Scalability with Sagas

Udi Dahan has written a very insightful article on InfoQ on using sagas for achieving scalability. You can find it here.

Thursday, April 10, 2008

Surviving the Paradigm Shift

SOA has taken quite a battering over the years. There are a lot of disenchanted businesses out there burned by failed SOA implementations. Despite this battering, SOA has not only survived but still attracts considerable attention from businesses across a broad range of industries.

But how long will this last? Does SOA hang by a thread? How many failed SOA implementations will it take for businesses to simply give up? Perhaps the reason why they have not done so already is that the fundamental business needs that SOA addresses (the need for agility and efficiency) are stronger than ever.

We as SOA practitioners need to examine why so many SOA implementations either fail completely, or fail to meet expectations and address these issues as a matter of priority.

One of the key reasons why we have so many bad architectures based on SOA is that many SOA practitioners in the field do not have a good understanding of the best practices and principles around applying SOA in a business context.

This is perpetuated by the proliferation of bad guidance in the SOA space. Some vendors are in part responsible for this, pushing their own guidance that aligns well with their existing EAI stacks rather than with SOA best practice.

The vendors however do not bear full responsibility here. Most SOA practitioners have their background in software development; the problem being that many traditional object oriented programming paradigms are anti-patterns in SOA. As such, many of the lessons they have learned are invalid in the SOA arena.

Examples of where we see traditional programming approaches being employed in the application of SOA are the preference for command messages, synchronous request-reply and RPC interactions. Another example is the tendency to prefer centralised data architectures and CRUD interfaces.

We need SOA practitioners to unlearn what they have learned in order to accommodate the coarse grained nature of services, the need for asynchrony, the fallacies of distributed computing and the fact that services must and will evolve independently as they are under different ownership domains.

Wednesday, April 9, 2008

SOA and Referential Integrity

One of the edicts of SOA is that services shall not share data except by way of exchanging explicitly defined messages. A question that then tends to pop up quite often is how we maintain referential integrity between services.

The question stems from the fact that database systems do not allow us to set up foreign key constraints that cross database instances. Thus, with autonomous services potentially all housing their own independent database instances, how do we ensure that a record in one service will reference a valid record in another?

Well the fact is that we can't. But fortunately for us, we actually don't want to. If we were to establish constraints crossing service boundaries, then we would be creating unwanted coupling between services. Moreover, the database instances would need to communicate directly with each other in order to enforce these constraints which would violate our service boundaries.

As it turns out, we don't have to worry about referential integrity between services when we have a decentralised data architecture.

With a centralised data architecture, we have our services pick up their data from other services using CRUD interfaces. We might have a Customer service housing all our customer data and an Order service housing all our order data.

With this approach, we have the ability to create orders in the Order service against a customer ID that doesn't exist in the Customer service. This of course is a problem.

With a decentralised data architecture however, the Order service will contain basic customer data that it would have picked up through receipt of event messages from the Customer service. This way, if we then attempt to create an order for an invalid customer, the foreign key constraint that we would have established between the Customer and Order tables in the Order service database would prevent this from occurring.
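A minimal sketch of how the Order service might build up that local customer data (the table, column and event names are all hypothetical):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Sketch: the Order service keeps its own copy of basic customer data,
    // populated from Customer service events. The local foreign key from the
    // Order table to this table is what enforces integrity.
    class CustomerCreatedHandler {
        void handle(Connection conn, CustomerCreatedEvent event) throws SQLException {
            String sql = "INSERT INTO order_service_customer (customer_id, name) VALUES (?, ?)";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setLong(1, event.customerId);
                stmt.setString(2, event.name);
                stmt.executeUpdate();
            }
        }
    }

    class CustomerCreatedEvent {
        long customerId;
        String name;
    }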

Something else to consider is that because different services in an enterprise may fall under different ownership domains, they may have different audit requirements. One department may decide that they do not need to keep records, and as such the service is allowed to delete them.

If we had referential integrity enforced between service database instances, then we would no longer be able to offer this flexibility. The audit requirements of one department would then influence another department which would not be well received.

So in conclusion, the fact that we can't have referential integrity enforced between services is a moot point. We don't want it anyway.

Tuesday, April 8, 2008

When Event Messages are not Ideal

In a recent post, I discussed why we should prefer the use of event messages over command messages. There are however situations where command messages are more appropriate.

Say for instance we have a billing service that has the ability to raise invoices. Let us also say that there is a large number of other services in the enterprise supporting business processes that at some point require an invoice to be raised. Let us then assume that the action of raising the invoice is to be exactly the same, or at least remarkably similar, in all cases.

Using event messages in this scenario would lead us to having the billing service subscribe to different event messages from a number of different services, all of which result in the exact same operation in the billing service. This is somewhat wasteful, as the billing service needs to be aware of and process a large number of event message types.

As the required behaviour of the raise invoice operation will not vary between contexts, the context of the originating process is mostly irrelevant. This makes the raise invoice operation of the billing service highly reusable.

As such, we should strongly consider using a single command message sent by all other services to the billing service to invoke the raise invoice operation.
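Sketched as a class (the fields are illustrative), the shared command deliberately carries no context about the originating process:

    import java.math.BigDecimal;

    // Illustrative command message sent to the billing service by any service
    // that needs an invoice raised. Note the absence of originating-process
    // context; the operation behaves identically for all senders.
    class RaiseInvoiceCommand {
        long customerId;
        BigDecimal amount;
        String description;
    }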

We need to be careful however that the reuse here is mandated by the business, rather than coincidental. If we find that in the future each of these services start having different needs of the billing service in raising invoices, we will greatly overcomplicate this operation and start needing to update the command message structure and/or billing service implementation to compensate. We start suffering the effects of coupling again.

In the field we tend to find that the context of the originating process influences the behaviour of the target service most of the time. Where it doesn't initially, we tend to find that it does at some point in the future. That is why it is best in general to prefer the use of event messages over command messages.

Monday, April 7, 2008

SOA and Reuse

One of the key benefits often touted by SOA practitioners is reuse. It is usually described in the following way:

A business will build up an asset base of services over time. When the need arises to implement some new business process, we simply use a process orchestration engine to wire up our existing services in support of the new business process.

The concept is certainly appealing. The ability to be able to support entirely new business processes so quickly, cheaply and easily certainly would drive a lot of business agility. But is this nirvana actually achievable? As usual, the devil is in the detail.

The kind of reuse described here is the kind that we see in traditional programming. One component exposes a function which many other components can invoke, passing parameters to control the execution of the shared component.

This model is not appropriate for SOA because of the coarse grained nature of services and the need for asynchrony. It is also problematic because the calling component is instructing the shared component what to do. This translates to the use of command messages in SOA, which as previously discussed introduces coupling between our services. It is in fact this coupling that prevents us from reusing the shared service effectively.

Component oriented programming has taught us that the larger the granularity of the shared component, the harder it is to reuse in different contexts. In SOA terms, the different contexts we are talking about are the different services sending the shared service the command messages.

When we have Service A sending Service B a command message, Service B is performing a function in the context of the activities occurring in Service A. If we now reuse Service B by having Service C send Service B the same command (albeit perhaps with different content), Service B is now performing a function in the context of the activities occurring in Service C as well.

The problem here is in the intent of the command message. Services A and C are instructing Service B what to do in support of Service A and C's activities respectively. A change in the needs of Service A or C can potentially lead to the need for a change in the implementation of Service B.

As we add more and more services reusing Service B in this way, Service B will become increasingly confused and complex, attempting to cater for the different needs of all the services reusing it.

Consequently, we need to think about reuse in SOA differently than with traditional programming. With SOA the unit of reuse should be the event, not the function. As our SOA matures, we evolve an event catalogue where each event has business relevance (such as a sale event or policy cancelled event).

We can then add a new process that subscribes to relevant business events and potentially publishes new events. If necessary, services are then updated to respond to new events published by the new process.

Now I can hear many of you saying, with the approach described in the opening paragraph we don't have to update any of our existing services - we just wire them up with middleware to support a new process. Well, as previously discussed, that is a false promise, as the service operations you are attempting to reuse will very likely need to change in the context of the different processes leveraging them. So it is likely they will need to be updated anyway.

Furthermore, if a business process supported by an existing service needs to be updated to respond to a new event, then it is completely reasonable that we update the service implementation. That is where the change has taken place! By wiring up existing services with process orchestrations in the middleware you are effectively leaking your service logic outside your service boundary. I'll discuss this in more detail in a future post.

The reason why the event driven approach does not suffer from the issues of the command message approach is that the service publishing the event does not make any assumptions as to which service(s) are subscribed or how they will behave in response to the event.

In conclusion, SOA does indeed deliver reuse in that any given business logic is implemented in one and only one service. Services are written once and then they evolve over time. They are not necessarily static pieces of functionality that are then composed to deliver new value. So although reuse is a key benefit of SOA, the kind of reuse often touted by SOA practitioners is often not achievable.

Saturday, April 5, 2008

Mandated vs. Coincidental Reuse

Developers and architects alike often strive to achieve the holy grail of reuse in the systems they design. It makes sense - reuse is a worthy goal. It means we get repeated value from the software we implement. It also means the total volume of software is reduced, reducing the amount of software that needs to be maintained. This means less effort for developers and lower costs to the people footing the bill.

But reuse must be employed with caution. Often in my travels I have seen reuse become a goal unto itself, rather than a means of designing a system aligned with the problem domain. Developers often see patterns where the same logic appears multiple times and aim to extract that logic out into a common place to be reused, despite there being no mandate in the context of the business that the logic in fact be the same for all time.

If it is not mandated in the context of the business that the logic be the same, then we very likely will end up shooting ourselves in the foot by reusing it.

Business requirements are fluid. This is why loose coupling is so crucial in software systems. It allows us to better cater for change. Once a system has been deployed, it must then evolve to support the changing needs of the business. We need to ask whether the business may require that logic duplicated in multiple areas be allowed to evolve independently. If so, the logic should not be reused.

Consider an insurance system that manages policies within a number of different product families. The logic that is in common between the product families at the time the system is first developed is not necessarily the logic that will be in common in the future - especially if the business expects to be able to evolve the rules of these product families independently. We must identify which rules very likely will be in common for all time and reuse only the logic corresponding to those rules.

Note that this guidance applies not only to the reuse of business logic, but also the reuse of data representation. This is one of the reasons why we do not allow services to share data directly. The representation of a customer in one service must be allowed to evolve independently of the representation in another. If it is mandated by the business that the representations be the same, then it is likely that the two services in fact should be one.

Maintainability is the ultimate dimension of software quality to which we as architects and developers must aspire. Where reuse improves maintainability, it is appropriate. However, where it hurts maintainability, it should be avoided, even though that means we must write the same logic multiple times.

.NET Community of Practice Session Slides

Thank you to all those who attended the .NET Community of Practice session on Thursday evening. For those who missed it, you can download the presentation slides here.

Wednesday, April 2, 2008

Avoid Command Messages

In their book Enterprise Integration Patterns, Hohpe and Woolf state that there are three basic types of messages at a semantic level - command messages, document messages, and event messages.

When discriminating between these message types, we are concerned with the semantics of the message name. So for example, a CancelPolicyRequest message is a command message, as it is instructing the service to cancel a policy. A Policy message is a document message, as it contains information about a business document without any context as to what should be done when it is received. And a PolicyCancelledNotification message is an event message as it informs the receiving service that a policy was cancelled, but does not specify what action the receiving service should take in response to the event.
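Sketched as simple classes (the fields are illustrative), the distinction is purely semantic; nothing structural forces a message into one category or another:

    import java.util.Date;

    // Command message: instructs the receiving service what to do.
    class CancelPolicyRequest {
        long policyId;
    }

    // Document message: carries business data with no instruction attached.
    class Policy {
        long policyId;
        String holderName;
        Date effectiveDate;
    }

    // Event message: reports that something happened; each subscriber decides
    // for itself how to respond.
    class PolicyCancelledNotification {
        long policyId;
        Date cancelledOn;
    }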

Although command messages do not constitute an RPC interaction, they introduce coupling between services where the other two message types do not. When Service A sends a command message to Service B, Service A is making a decision on behalf of Service B as to what Service B should do in the context of Service A's activities.

The fact that Service A is instructing Service B what to do means that Service A determines what shall be done, whereas Service B decides how it shall be done. This is a subtle but important form of coupling.

Because Service B is performing an operation for Service A, as Service A evolves we may find that Service B is no longer able to meet the needs of Service A. Now we must update Service B due to a change in the needs of Service A. This is the essence of coupling.

The root of the problem here is that Service B's behaviour is being governed by Service A. Service A is making an assumption regarding Service B's behaviour, introducing a dependency.

Moreover, Service B cannot make up its own mind what to do in response to Service A's activities; it must be instructed by Service A.

Consider the case where Service C now must perform some action in the context of Service A's activities. We now need to update Service A to send a command message to Service C as well. Imagine how the complexity here can grow when we have a large number of services!

The solution is to use message types that do not involve instructing a service how to behave. This leaves us with the document and event message types. The document message type tends to appear mostly with the REST style of architecture (which I will discuss in a future post).

With SOA, the preference is to use event messages. In the previous example, Service A would publish an event (such as InvoicePastDueNotification) to which Service B would be subscribed. Upon receipt of the notification, Service B would then make the decision locally as to how to respond to this event; in this case, cancelling the policy. Service B would then very likely publish a PolicyCancelledNotification message, in case other services needed to respond in some way to this event. If the need arose in the future for Service B to respond differently, this would involve only a change in Service B.

If Service C then needed to perform some action (say claw back commissions) in response to the InvoicePastDueNotification message published by Service A, we would simply need to subscribe Service C to the relevant event topic, and then update Service C to behave as needed. Again we have not needed to make a change in Service A.

Here we can see a business workflow occurring that spans multiple services, where the decisions as to how each service contributes to the process are handled locally within each service.
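A minimal sketch of Service B's side of this interaction; the two notification types come from the example above, while the other names are hypothetical stubs:

    // Sketch of Service B's handler: the decision to cancel is made locally,
    // not dictated by Service A.
    class InvoicePastDueHandler {
        private final PolicyRepository policies;
        private final EventPublisher publisher;

        InvoicePastDueHandler(PolicyRepository policies, EventPublisher publisher) {
            this.policies = policies;
            this.publisher = publisher;
        }

        void handle(InvoicePastDueNotification event) {
            // Local business decision: respond to the past-due invoice by
            // cancelling the policy.
            policies.cancel(event.policyId);
            // Publish a new event so other services (e.g. Service C) can react.
            publisher.publish(new PolicyCancelledNotification(event.policyId));
        }
    }

    interface PolicyRepository { void cancel(long policyId); }
    interface EventPublisher { void publish(Object event); }

    class InvoicePastDueNotification { long policyId; }

    class PolicyCancelledNotification {
        final long policyId;
        PolicyCancelledNotification(long policyId) { this.policyId = policyId; }
    }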

As always however there is a catch. What if in order for Service B to decide how to respond to the event published by Service A, there is insufficient data in the event message? This would mean we would need to update Service A to include more data in the event message - once again we have coupling.

As such, we need to be careful when designing our event messages such that all relevant information regarding the event is included in the message. The needs of the subscribers are not known in advance, so we cannot just include the minimum information in the event message to satisfy the existing intended subscribers.

So in conclusion, prefer the use of event messages over command messages at the service boundary where possible; but take care in the design of your event messages to make sure all relevant information is included.

Tuesday, April 1, 2008

Reminder: .NET Community of Practice Session

This is just a reminder to join me this Thursday evening as I present to the Perth .NET Community of Practice on SOA design patterns and best practice. The session will cut through much of the hype surrounding SOA and deliver some clear and practical guidance on how to design and build services on the Microsoft platform. Details below:

DATE: April 3, 5:30pm
VENUE: Excom, Level 2, 23 Barrack Street, Perth
COST: Free. All Welcome

What is a Topic?

Publish-subscribe is an asynchronous messaging paradigm, where messages are addressed to topics, rather than specific recipients. All subscribers subscribed to a given topic then receive a copy of any event message published on that topic.

It is common for publish-subscribe infrastructure to involve published messages being sent to an intermediary broker. The broker manages the subscriptions and routing of messages to the appropriate subscribers. Such a broker would normally queue published messages for later delivery where subscribers are unavailable.

In other implementations, the message routing function is pushed out directly to the endpoints, thus eliminating the need for the broker. In these cases, multiple copies of the message are sent directly from the publisher to each of the subscribers. This can be optimised at the network layer by associating topics with IP multicast groups so that only a single copy of a published message is sent over the wire.
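In Java, for example, the JMS API exposes this model directly. A minimal sketch (connection factory lookup and error handling omitted, topic name illustrative):

    import javax.jms.JMSException;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.jms.Topic;
    import javax.jms.TopicConnection;
    import javax.jms.TopicConnectionFactory;
    import javax.jms.TopicPublisher;
    import javax.jms.TopicSession;

    // Minimal JMS publish sketch: the message is addressed to a topic, and the
    // broker delivers a copy to every subscriber of that topic.
    class SaleEventPublisher {
        void publishSale(TopicConnectionFactory factory, String body) throws JMSException {
            TopicConnection connection = factory.createTopicConnection();
            try {
                TopicSession session =
                        connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
                Topic topic = session.createTopic("org.sales");  // illustrative name
                TopicPublisher publisher = session.createPublisher(topic);
                TextMessage message = session.createTextMessage(body);
                publisher.publish(message);  // broker fans out to all subscribers
            } finally {
                connection.close();
            }
        }
    }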

As explained in my post on defining an endpoint, the endpoints of a service are the endpoints of the channels controlled or owned by that service. Where an asynchronous queued transport such as MSMQ is employed, the endpoint will have a physical queue associated with it, from which the service reads the messages addressed to that endpoint. The endpoint address is the address of the queue. Moreover, the queue is owned by the endpoint. That is, it should not be shared between endpoints, and certainly not between services.

In the case of a publish-subscribe endpoint, the endpoint has an associated topic onto which the service publishes messages. The topic is owned by the service endpoint, and the endpoint address in this case is the address of the topic. As was the case with a queue, a topic should not be shared between endpoints, and certainly not between services.

In some publish/subscribe implementations, topics can be arranged in a hierarchy. Subscriptions can then be performed against a topic wildcard, rather than being limited to only one topic. This means a subscriber can receive notifications published to a topic as well as any topic underneath it in the hierarchy.

For instance, we may have the following topics:


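    org.sales
    org.sales.direct
    org.sales.online
    org.marketing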

A subscriber that subscribes to org.sales.* will be sent all messages published to org.sales and all of its child topics, whereas a subscription to org.sales will only result in receipt of messages published specifically on that topic.