What to avoid with microservices

Five things to avoid with microservices

Architectures and approaches normally turn into trends because enough use cases exist to corroborate their genuine usefulness when solving a particular problem or class of problems. In the case of microservices, before they were trendy, enough companies had grown their monoliths beyond manageability. They had a real problem on their hands: a large application that fundamentally clashed with the modern ways of scaling, managing, and evolving large systems in the cloud.


Through some trial and error, they reinvented their systems as loose collections of microservices with independent scalability, lifecycle, and data concerns. The case studies in 1.5, “Case studies and most common architectural patterns” on page 14 are just a small sample of companies successfully running microservices in production. It is important to remember these use cases, because the trendiness of microservices threatens to compel developers to try them out in contexts where they are not meant to be used, resulting in project failures in some cases.

This is bad news for practitioners who derive genuine benefits from such an architecture. The following section identifies situations where microservices are not a good choice. It helps you avoid incurring the costs of implementing microservices infrastructure and practices when they are not warranted. It also helps you avoid the microservices hype, and prevents failures that would sour people on an otherwise sound technical approach.

1. Don’t start with microservices


When beginning a new application, do not insist that it be built as microservices. Microservices attempt to solve problems of scale. When you start, your application is tiny. Even if it is not, it is just you, or maybe you and a few more developers; you know it intimately and can rewrite it over a weekend. The application is small enough that you can easily reason about it.


2. Don’t even think about microservices without DevOps


Microservices cause an explosion of moving parts. It is not a good idea to attempt microservices without serious deployment and monitoring automation. You should be able to push a button and get your application deployed. In fact, you should not even need to do that: committing code should get your application deployed through commit hooks that trigger the delivery pipeline, at least in development. You still need manual checks and balances for deploying into production. A minimal sketch of this idea follows.
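The following Python sketch illustrates commit-triggered deployment. It assumes a Git hosting service that POSTs a JSON payload to a webhook on every push, and a hypothetical deploy.sh script; both are illustrative assumptions, not part of any specific product.

    # Minimal sketch of commit-triggered deployment. Assumes the Git host
    # POSTs a JSON payload with a "ref" field, and that deploy.sh is a
    # hypothetical script that deploys to the named environment.
    import json
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CommitHookHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            branch = payload.get("ref", "")
            # Only the development branch deploys automatically; production
            # still goes through manual checks and balances.
            if branch.endswith("/develop"):
                subprocess.Popen(["./deploy.sh", "development"])
            self.send_response(202)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), CommitHookHandler).serve_forever()

In practice this role is filled by your delivery pipeline tooling; the point is that no human action beyond the commit is required to reach the development environment.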


3. Don’t manage your own infrastructure


Microservices often introduce multiple databases, message brokers, data caches, and similar services that all need to be maintained, clustered, and kept in top shape. It really helps if your first attempt at microservices is free from such concerns. A platform as a service (PaaS), such as IBM Bluemix or Cloud Foundry, enables you to become functional faster and with fewer headaches than an infrastructure as a service (IaaS), provided that your microservices are PaaS-friendly.
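"PaaS-friendly" largely means taking configuration from the platform instead of managing it yourself. The following Python sketch reads the credentials of a bound service from the VCAP_SERVICES environment variable that Cloud Foundry-based platforms such as Bluemix provide; the "rediscloud" label and the local fallback values are assumptions for illustration.

    # Minimal sketch of PaaS-friendly configuration: bound services are
    # discovered through VCAP_SERVICES rather than hand-managed config files.
    import json
    import os

    def service_credentials(label, fallback):
        """Return credentials of the first bound service with this label,
        or the fallback when running outside the platform."""
        services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
        instances = services.get(label, [])
        if instances:
            return instances[0]["credentials"]
        return fallback

    # "rediscloud" and the localhost fallback are illustrative assumptions.
    redis_config = service_credentials(
        "rediscloud", {"hostname": "localhost", "port": 6379})

A service written this way runs unchanged on a developer laptop and on the platform, which is what lets the PaaS take over the operational concerns.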


4. Don’t create too many microservices


Each new microservice uses resources. Cumulative resource usage can outstrip the benefits of the architecture if you exceed the number of microservices that your DevOps organization, process, and tooling can handle. It is better to err on the side of larger services, and only split them when they end up containing parts with conflicting demands for scaling, lifecycle, or data. Making services too small transfers complexity away from the microservices and into the service integration task. Also, do not share microservices between systems.

5. Don’t forget to keep an eye on the potential latency issue


Making services too granular, or requiring too many dependencies on other microservices, can introduce latency, so take care when introducing additional microservices. When you decompose a system into smaller autonomous microservices, you increase the number of calls made across network boundaries to handle a request. These calls can be service-to-service calls or calls from a service to a persistence component, and each additional call can slow the system down. Therefore, it is fundamental to run performance tests that identify the sources of latency in those calls. Measurement matters: you need to know where the bottlenecks are. For example, you can use the IBM Bluemix Monitoring and Analytics service for this purpose. Beyond that, services should cache aggressively. If necessary, consider adding concurrency, particularly around service aggregation.
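The following Python sketch combines the last two mitigations: a small time-based cache in front of downstream calls, and concurrent fan-out when aggregating several microservices into one response. The service URLs and the 30-second TTL are placeholders for illustration.

    # Minimal sketch of aggressive caching plus concurrent service aggregation.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    _CACHE = {}          # url -> (expiry timestamp, body)
    _CACHE_TTL = 30      # seconds; tune per service based on measurements

    def fetch(url, timeout=2.0):
        """Fetch one downstream service, serving from the cache while fresh."""
        entry = _CACHE.get(url)
        if entry and entry[0] > time.time():
            return entry[1]
        with urllib.request.urlopen(url, timeout=timeout) as response:
            body = response.read()
        _CACHE[url] = (time.time() + _CACHE_TTL, body)
        return body

    def aggregate(urls):
        """Call the downstream services in parallel, so total latency
        approaches the slowest call rather than the sum of all calls."""
        with ThreadPoolExecutor(max_workers=len(urls)) as pool:
            return list(pool.map(fetch, urls))

    if __name__ == "__main__":
        # Placeholder URLs for two hypothetical downstream microservices.
        results = aggregate([
            "http://catalog.example.com/api/items",
            "http://pricing.example.com/api/prices",
        ])

Whether the cache TTL and the degree of concurrency are appropriate is exactly the kind of question that the performance measurements described above should answer.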