I tend to find during my travels that there are two main approaches to SOA design. In one corner we have the people who favour a decentralised data architecture, and in the other corner we have the people who favour a centralised data architecture.
In the decentralised corner, we have data redundancy between the various services. For instance, customer data may exist in some form or another in every service in your enterprise. When a change occurs in the customer information in one service, an event is published and the services interested in that change receive a notification. They then in turn update their local representations.
In the centralised corner, there is only one centralised source of any given piece of information. Any service that needs that information at any time makes a request to that service to retrieve the information.
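To make the decentralised corner a little more concrete, here is a minimal in-memory sketch. The bus, event and service names (CustomerAddressChanged, BillingService and so on) are hypothetical, not any particular product: a customer service publishes an event when an address changes, and a billing service that has subscribed to that event updates its own local copy of the customer record.

```python
from dataclasses import dataclass
from collections import defaultdict

class Bus:
    """Hypothetical in-memory bus standing in for a real service bus/broker."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event):
        for handler in self._subscribers[type(event)]:
            handler(event)

@dataclass
class CustomerAddressChanged:       # event published by the customer service
    customer_id: str
    new_address: str

class BillingService:
    """Keeps its own local representation of customer data."""
    def __init__(self, bus):
        self.local_customers = {"C42": {"address": "1 Old St"}}
        bus.subscribe(CustomerAddressChanged, self.on_address_changed)

    def on_address_changed(self, event):
        # Update the local copy so later requests never need the customer service.
        self.local_customers.setdefault(event.customer_id, {})["address"] = event.new_address

bus = Bus()
billing = BillingService(bus)
bus.publish(CustomerAddressChanged("C42", "7 New Rd"))
print(billing.local_customers["C42"])   # {'address': '7 New Rd'}
```

In a real system the publish and the subscriber's update would go over a durable transport rather than in-process, but the shape is the same: data flows to the services that need it when it changes, not when it is requested.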
The centralised approach has the following disadvantages:
- It produces far more chatty message exchanges because every service needs to go to other services to pick up the data needed to service a given request (see the sketch after this list).
- Data must be updated transactionally across multiple services, but we never want transactions to span services as they add too much coupling. Records may be locked in one service for extended periods whilst waiting for other services to complete.
- Synchronous request/reply (which is what is used to pick up the data from the other services) is slow and blocks the calling thread. You can chew up a lot of threads waiting for responses from other services.
- If one of these services needs to be rebooted, every service that depends on it for retrieving data will keel over.
- One data representation must serve the needs of all systems in the enterprise, which considerably increases complexity.
- Any change to that representation could impact every system in the enterprise, meaning considerably greater testing effort.
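For contrast, here is a sketch of the chattiness described in the first and third points above, assuming a hypothetical ordering service that holds no customer or product data of its own and so must make blocking calls to two other services before it can respond to a single request.

```python
import time

# Hypothetical stand-ins for remote services reached via synchronous request/reply.
class CustomerService:
    def get_customer(self, customer_id):
        time.sleep(0.2)                 # simulated network round trip
        return {"id": customer_id, "name": "Jill"}

class ProductService:
    def get_product(self, product_id):
        time.sleep(0.2)
        return {"id": product_id, "price": 10.0}

class OrderingService:
    """Holds no customer or product data of its own, so every request fans out."""
    def __init__(self, customers, products):
        self.customers = customers
        self.products = products

    def place_order(self, customer_id, product_id):
        customer = self.customers.get_customer(customer_id)  # blocking call 1
        product = self.products.get_product(product_id)      # blocking call 2
        # The calling thread is tied up for the sum of both round trips, and if
        # either remote service is down this request fails outright.
        return {"customer": customer["name"], "total": product["price"]}

svc = OrderingService(CustomerService(), ProductService())
print(svc.place_order("C42", "P7"))
```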
Due to these shortcomings, and the success I have had with the decentralised approach, I sit squarely in the decentralised corner. However, I still tend to find that most people I come across sit in the centralised corner. Are there some advantages to this approach I am missing? Do we need more readily available guidance in this space?
8 comments:
Of course there are, Bill! The centralised approach is just plain easier to design, implement and, most importantly, think about.
As a result, it can be delivered quicker. Yessss, it can!
Both approaches have their merits, but to say one is better than another would be misleading.
Does the customer really care about, and more importantly is prepared to pay for, the advantages that decentralisation offers? Or is it just that we technophiles know the downsides and WANT to use a particular pattern?
One reason software developments flounder is that IT is allowed to 'drive' the development rather than the business, resulting in overly complex designs. It all boils down to good engineering that is fit for purpose, as opposed to over-engineering.
So I put it to you that it is not a case of choosing your SOA pattern on its merits and deciding where you sit squarely, but rather of choosing the solution that meets the requirements in the simplest fashion for the best and most expedient outcome.
Is that abrasive enough?
Please see my follow-up post for my comments.
Hi Bill, I have a quick question. I am still getting my head around what true SOA really is, so please bear with me!
If the data is decentralised and the services send events to each other relating to that data, and an event fails for one reason or another, doesn't that leave the data in one of the services out of sync?
Thanks
Dirk
When using transactional services, messages (either command or event messages) are read off the queue and then processed as part of a single distributed transaction.
Any messages then sent or published onto the service bus as part of processing that message are likewise part of that same distributed transaction.
This means that if a failure occurs whilst publishing an event, the received message is dropped back onto the queue to be processed again at a later time.
If the message fails to be processed a number of times in a row, then that message is considered a "poison message" and as such is placed on a failure queue.
Someone must then manually address this failure. This may involve someone simply dropping the message back onto the request queue from the failure queue if the issue causing the failure has been corrected.
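As a rough sketch of that retry behaviour (the in-memory queues and the retry limit of three are purely illustrative; a real transactional service bus would use durable queues and a distributed transaction rather than the try/except below):

```python
from collections import deque

MAX_RETRIES = 3                  # hypothetical limit before a message is deemed poison

request_queue = deque()
failure_queue = deque()

def process(message):
    # Placeholder for the real work: update local state, publish further events, etc.
    if message.get("poison"):
        raise RuntimeError("handler failed")

def pump(queue):
    while queue:
        message = queue.popleft()
        try:
            process(message)
            # In a transactional service the receive, the processing and any
            # messages published would all commit together at this point.
        except Exception:
            message["attempts"] = message.get("attempts", 0) + 1
            if message["attempts"] >= MAX_RETRIES:
                failure_queue.append(message)   # poison message: park it for a human
            else:
                queue.append(message)           # "roll back": message returns to the queue

request_queue.append({"id": 1})
request_queue.append({"id": 2, "poison": True})
pump(request_queue)
print(len(failure_queue))        # 1 -- the poison message awaits manual attention
```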
I will be doing a post in the future on poison messages, what kind of situations can cause them and how best to deal with them.
Thanks Bill,
So you would use some kind of reliable messaging to guarantee that the message would reach its destination.
But that would still presumably leave a sync issue while the message was being retried?
Dirk
We usually use durable (reliable) messaging for publish-subscribe. We might not use it in situations where there isn't much issue if a message doesn't get to its destination.
For example, if we were publishing stock prices as they changed, it wouldn't really matter if a message got lost because there'd be another one along shortly with a new price.
With regards to your question about the sync issue, there will always be a certain amount of "information entropy" between your services as events propagate between them.
This is quite acceptable (and somewhat advantageous) when we have a process centric (rather than data centric) view of the world. This is discussed a bit more here and here.
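As a rough illustration of that trade-off, here is a hypothetical in-memory sketch (not any real product's API): a durable subscription holds onto messages while the subscriber is unavailable, whereas a best-effort one simply drops them, which is fine for something like a stream of stock prices.

```python
class Subscription:
    def __init__(self, handler, durable):
        self.handler = handler
        self.durable = durable
        self.online = True
        self.backlog = []              # only used when durable and offline

    def deliver(self, message):
        if self.online:
            self.handler(message)
        elif self.durable:
            self.backlog.append(message)   # hold the message until the subscriber returns
        # non-durable and offline: the message is simply dropped

    def reconnect(self):
        self.online = True
        for message in self.backlog:
            self.handler(message)
        self.backlog.clear()

received = []
prices = Subscription(received.append, durable=False)   # lost price ticks don't matter
orders = Subscription(received.append, durable=True)    # lost orders do

prices.online = orders.online = False
prices.deliver({"tick": 99.5})    # dropped; a fresher price will follow shortly
orders.deliver({"order": "A1"})   # retained
prices.online = True
orders.reconnect()
print(received)                   # [{'order': 'A1'}]
```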
Hi Bill, first of all thanks for a great post. On the risk of changing a data representation: can't this be decoupled through the service contract interface? On the other hand, if the service interface itself changes, wouldn't the dependent systems also require that change? I am a bit confused here. Can you please provide some examples?
A decentralized data storage approach can also be implemented by using an MDM solution. The MDM is the centralized source and master of information. However, local (partial) copies can exist with every application/service. This reduces latency and availability issues. The MDM system (and yes, it's middleware again!) would take care of copying local changes back into the master again. It would also offer controls for adjusting the trade-offs involved (the CAP theorem: consistency, availability, partition tolerance).
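A minimal sketch of what that might look like, with hypothetical names throughout: a master store pushes changes out to the local copies, and local edits are written back through the master, which then redistributes them.

```python
class MasterDataStore:
    """Hypothetical MDM master: the single source of record for customer data."""
    def __init__(self):
        self.records = {}
        self.replicas = []

    def attach(self, replica):
        self.replicas.append(replica)

    def update(self, key, value):
        self.records[key] = value
        for replica in self.replicas:       # push the change to every local copy
            replica.local[key] = value

class LocalCopy:
    """Partial copy held by an application/service to cut latency and availability risk."""
    def __init__(self, master):
        self.local = {}
        self.master = master
        master.attach(self)

    def read(self, key):
        return self.local.get(key)          # served locally, no call to the master

    def write(self, key, value):
        self.master.update(key, value)      # local change flows back through the master

master = MasterDataStore()
billing, shipping = LocalCopy(master), LocalCopy(master)
billing.write("C42", {"address": "7 New Rd"})
print(shipping.read("C42"))                 # {'address': '7 New Rd'}
```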