To Microservice or Monolith, that is the question...
Whether 'tis nobler in the mind to suffer
The pains and troubles of hosting and maintaining,
Or to take arms against a sea of troubles
And by opposing the split of services, end them.
To die - to sleep...
My apologies to Shakespeare.
The constant stream of questions I get about whether this or that system would be better architected as microservices, or built from multiple layers of front-end components hitting back-end doohickeys hosted in multiple layers of auto-magic "silos" of data and services, shows that truly, there is far too much confusion out there about what constitutes a "good" software development approach.
The real problem we face when weighing microservices against monoliths is that, in many cases, the people making the recommendations are optimising for the wrong thing.
Microservices were developed in an environment where there was a need to optimise uptime and compute efficiency at the expense of developer time and resources.
So the "too long; didn't read" of this post is that, unless you have development resources spilling over the gunnels, stick with a monolith.
Not convinced? Let me explain why.
Netflix, Google, Facebook, AWS et al. have massive systems that take untold numbers of computers to run, but they also have untold numbers of developers to maintain them. Spinning off a development team of 8-10 people to build and maintain a microservice is nothing to them.
Microservices were developed as a solution to a very large and complex problem. That problem was, in a nutshell, "how do we get hundreds of developers working effectively on this big sprawling application while keeping uptime high and compute performance efficient?"
The way this was solved was by taking large, highly-coupled applications and splitting them along "natural" lines of responsibility. The billing system, the friends news feed and other systems like this became separate "services". These services had their own data stores, ran independently of each other and gave each other data through clearly defined interfaces.
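The shape of that split can be sketched in miniature. In the (entirely hypothetical) example below, each "service" owns its own data store and exposes a narrow, clearly defined interface; neither one ever reaches into the other's data directly:

```python
# A minimal sketch of the "own data store, defined interface" idea.
# Names and behaviour are illustrative, not any company's real design.

class BillingService:
    def __init__(self):
        self._invoices = {}  # this service's private data store

    def create_invoice(self, user_id, amount):
        self._invoices[user_id] = self._invoices.get(user_id, 0) + amount

    def total_owed(self, user_id):
        # The defined interface: the only way other services see billing data.
        return self._invoices.get(user_id, 0)


class NewsFeedService:
    def __init__(self, billing):
        self._billing = billing  # talks to billing only via its interface
        self._posts = {}         # its own, separate data store

    def add_post(self, user_id, text):
        self._posts.setdefault(user_id, []).append(text)

    def feed(self, user_id):
        posts = list(self._posts.get(user_id, []))
        # Cross-service data flows through the interface, never the raw store.
        if self._billing.total_owed(user_id) > 0:
            posts.append("Reminder: you have an outstanding invoice.")
        return posts
```

In a real deployment these would be separate processes talking over the network, but the discipline is the same: private data, public interface.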
The engineers who worked there thought this was a pretty cool approach to the problem (it was!). They could now have multiple development teams each working on their own microservice, independently of the others. Dozens of teams could work in parallel, getting more done. The services could be deployed independently, allowing more deploys and features to be pushed within the same period of time. And it could all run more efficiently, as each microservice consumed fewer resources per running "instance".
This was all a massive win for these large companies, so they wrote blog posts about it extolling the virtues and amazement of microservices.
The problem was that other people looked at it and applied the following reasoning:
1) Big massive companies use microservices.
2) Big massive companies are successful.
3) I want to be successful too.
4) Therefore, I SHOULD USE MICROSERVICES FOR MY APP!
This is a classic mistake of "correlation does not equal causation".
Facebook started as a monolith.
These large companies got to a point, through growing their monolithic applications, where speed and performance became real problems. They then, step by step, split out key parts of their systems as performance demanded.
But here's the key thing:
They did this once they were big and successful. These massive companies did not start out by building microservices.
Because they were optimising for developer resources in their early stages.
In any company with constrained developer resources, a monolith is going to be a more efficient approach to building your software. If you do have an explosion in growth (and I hope you do), the monolith will do just fine if you throw more expensive computers at it for long enough to split the most inefficient part out into its own service. More importantly, because you grew that much, you should be able to afford the developers to do that.
Wasting money on a microservice architecture when a monolith will do is an inefficient allocation of resources. It's not wrong. It's just inefficient.
Of course, there are cases where microservices from the get-go may be the right thing to do. But I've yet to see one with my own eyes.
If you have a monolith, and you aren't at tens of thousands of users per minute hitting the application, don't get sold on rewriting the whole thing into microservices. Instead, get the monolith more performant. There are lots of performance wins you can achieve quite simply.
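One example of such a quick win, sketched under assumptions (the function name and the idea that the hot path repeats the same expensive lookup are hypothetical): memoising a repeated computation inside the monolith with the standard library's cache decorator, so the expensive work runs once instead of on every request.

```python
from functools import lru_cache

CALLS = {"count": 0}  # tracks how often the expensive work actually runs


@lru_cache(maxsize=1024)
def product_summary(product_id):
    # Stand-in for an expensive query or computation the monolith
    # repeats on every request; real code would hit the database here.
    CALLS["count"] += 1
    return f"summary-for-{product_id}"
```

Repeated calls with the same argument hit the cache, not the database. It is a one-line change, and it buys you time without touching the architecture.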
Then, once the performance upgrade is done, and you STILL want to go the microservices route, find the one function of the monolith that is causing all the performance pain and split that out into a microservice.
But know this: once you split out a microservice, you now have two applications that need operations, maintenance, upgrades, and feature development, and that can fail.
Once you split out the third, you now have three sets of ops, maintenance, upgrades... and so on.
Even worse, you'll now need to maintain the test suite spanning these microservices and make sure that, if a developer changes the functionality of microservice A, applications B and C will still run in production with that change.
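One common way teams keep that promise is a consumer-driven contract test: the consumers (B and C) pin down the shape of service A's response, and A's test suite checks every change against those contracts before deploying. A minimal sketch, with hypothetical field names:

```python
# Hypothetical contract: the fields of service A's user response that
# consumers B and C rely on, and the types they expect.
CONSUMER_CONTRACT = {"id": int, "name": str, "email": str}


def service_a_user_response(user_id):
    # Stand-in for service A's real handler; imagine this is the code
    # a developer just changed.
    return {"id": user_id, "name": "Ada", "email": "ada@example.com"}


def satisfies_contract(response, contract):
    # Service A's suite runs this check for every consumer's contract.
    return all(
        field in response and isinstance(response[field], expected)
        for field, expected in contract.items()
    )
```

If a change to A drops or retypes a field that B or C depends on, the contract check fails in A's own test suite, before production finds out. Tools like Pact automate this pattern, but even a hand-rolled version like the above is coordination work that someone has to write and maintain.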
This sort of co-ordination work takes time. Developer time. If you can't afford to add significant overhead to all future development, dedicating a team to each microservice and making sure every developer knows what a change to service A will do to B and C, then microservices may not be for you.
So, to microservice or not to microservice?
If you have to ask: Stick with a monolith.