There's a lot of microservices hate, but there are also terrible balls of yarn. It reminds me that many orgs are not good at engineering.

A few rules that have served me well:

1) architect your software, have a diagram.
2) centralize responsibilities in the diagram.
3) not every box in that diagram needs to be a service, some boxes should be shared libraries
4) conversely, not every box should be a shared library
5) There are two primary reasons to make a box a service: you need more parallelism than one machine can provide, or the box is used by many other boxes and manages state. (Plz reply if you know other good reasons, my list is incomplete)
6) if you do your work right, making a library into a service and vice versa isn't a huge endeavor.
7) the reason monoliths get a bad rap is because people throw sensible abstraction layers away and end up with a terrible ball of yarn
8) the reason microservices get a bad rap is because people think they need to move every function into a service.
9) as an engineer, you get paid to make judgment calls and nuanced trade-offs. If you're at the extreme end of a spectrum, you're exceptional, and odds are not in a good way.
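Rule 6 above (swapping a library for a service and back) gets cheap if callers only ever see an interface. A minimal sketch, assuming Python; the box name and prices are purely illustrative, not from the thread:

```python
from abc import ABC, abstractmethod

class PricingBox(ABC):
    """A 'box' from the architecture diagram: callers depend only on this."""
    @abstractmethod
    def quote(self, item: str) -> float: ...

class LocalPricing(PricingBox):
    """Library flavour: runs in-process."""
    def quote(self, item: str) -> float:
        return {"apple": 1.0, "pear": 1.5}.get(item, 0.0)

class RemotePricing(PricingBox):
    """Service flavour: same interface, but each call would cross the network."""
    def __init__(self, base_url: str):
        self.base_url = base_url
    def quote(self, item: str) -> float:
        # A real implementation would make an HTTP/gRPC call here;
        # the wire protocol is omitted in this sketch.
        raise NotImplementedError

def checkout(pricing: PricingBox, items: list[str]) -> float:
    # Caller code is identical whether pricing is a library or a service.
    return sum(pricing.quote(i) for i in items)

print(checkout(LocalPricing(), ["apple", "pear"]))  # → 2.5
```

Swapping `LocalPricing` for `RemotePricing` changes deployment, not the callers.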
@HalvarFlake 10?) Not everyone is aware of all the conventions and rules of your architectural design all the time. Be it for new projects or old ones with lots of technical debt, pick something like @archunit, write architecture tests, and run them in your CI - preventing or reducing accidental technical debt, with temporarily accepted "frozen violations" as a metric of your progress.
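ArchUnit itself is a Java library, but the underlying idea ports anywhere: encode your dependency rules as ordinary tests so CI catches violations. A hypothetical sketch in Python (the `billing`/`frontend` module names and the rule are made up for illustration):

```python
import ast

def imported_modules(source: str) -> set[str]:
    """Collect top-level module names imported by a piece of source code."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

# Architecture rule: the 'billing' layer must not reach into 'frontend'.
FORBIDDEN = {"billing": {"frontend"}}

def check_rule(module_name: str, source: str) -> bool:
    """True if the module's imports respect the architecture rules."""
    banned = FORBIDDEN.get(module_name, set())
    return not (imported_modules(source) & banned)

# A violating module, as CI would flag it:
assert check_rule("billing", "from frontend.widgets import Button") is False
assert check_rule("billing", "import storage") is True
```

Real tools add the "frozen violations" bookkeeping on top: known offenders are whitelisted so the build only fails on new ones.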

@HalvarFlake I wrote a blog post last year with a similar stance on this issue: if you're a small team, avoid microservices as long as possible and only use them if you have the requirements AND the infrastructure to support them. Your approach and advice on how to split are very good. I will add them to my list.

https://www.inovex.de/de/blog/low-ops-cloud-native-architecture/

Architecture Best-Practices for Low Ops Cloud Native Applications - inovex GmbH

This blog post gives best practices for designing the software architecture of a cloud-native application with little Ops know-how.


@hikhvar so interestingly, we were a small team at optimyze and had really good experiences with our own (services) infra. I say "services" instead of microservices because there was a handful of them, and I think they were "right-sized" 🙂
That said: the team was small, but of unusually high quality.

Coming into Elastic where the stack had no sensible abstractions and was a weird monolith mess was actually quite a bad shock.

@hikhvar i like the modulith term btw! Good post!!!
@HalvarFlake I didn't invent that term. Came across it as well and liked it.
@HalvarFlake I've found observability hugely important with microservices; if it's not possible to trace requests across multiple microservices, I get into a lot of trouble 😵‍💫 very quickly
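The usual fix for that is propagating a correlation/trace ID across every service hop so logs can be stitched back together. A minimal sketch, with plain dicts standing in for HTTP headers and an illustrative header name (real systems typically use the W3C `traceparent` header):

```python
import uuid

TRACE_HEADER = "x-trace-id"

def handle_request(headers: dict, logs: list) -> dict:
    """One microservice hop: reuse the incoming trace ID or mint a new one."""
    trace_id = headers.get(TRACE_HEADER) or str(uuid.uuid4())
    logs.append(f"[{trace_id}] handled request")
    # Forward the same ID to the next service in the chain.
    return {TRACE_HEADER: trace_id}

logs: list = []
outgoing = handle_request({}, logs)        # edge service mints an ID
outgoing = handle_request(outgoing, logs)  # downstream service reuses it
# Both log lines now carry the same ID, so they can be grouped per request.
assert all(outgoing[TRACE_HEADER] in line for line in logs)
```

Tracing libraries do exactly this under the hood, plus timing spans and sampling.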

@HalvarFlake

Regarding making boxes into services, for me the most important part is that you always have to be able to say WHY you did it. It has a cost, so you have to know why you are paying that price.

And no, "being cool", "that's the way we have always done it" and similar are not valid reasons 😅.

If you don't know "the why", you cannot find out if you are getting what you are paying for.

@HalvarFlake reasons for 5:

Splitting a box out into a service helps you avoid leader election, or limits it to a very small part of the code base. In a previous job we had a lot of "stateless data plane services" controlled by a few instances of a control plane. Only the control plane required leader election.

A box becomes a service because it is managed by a different team with a very different release cadence.

@HalvarFlake regarding 5) Think about how the software is likely to change, and enable teams to work autonomously and in a state of flow, so they can change things fast rather than having to ask and coordinate with everyone else all the time (see @TeamTopologies and @suksr). Also think about where your org needs to innovate, and where it can use commodity/utility products or services; this will change over time, and not necessarily only by components becoming commodities (see Wardley Mapping)
@HalvarFlake More on 5) Embrace diversity in your tech stack. Similar to the progress of technologies on your favourite Tech Radar over time, people will bring in new tech that is worth trying out, learning its pros and cons, building skills in your org, using its benefits, and finally sunsetting it one day. To do that, you need boundaries for your services where you can do something different and possibly improve.

@mhartle @HalvarFlake @TeamTopologies @suksr

But don't start every service in a new language/toolset. To successfully manage a microservices architecture, I think you need a common foundation used across services. You need some standards to make it easier for your team to manage the services in the long run.
Corollary: rebuild your failed experiments in your default toolstack.

@hikhvar @HalvarFlake Yeah, treating your microservice landscape like some trading card collectible game isn't the way to go 😂
@HalvarFlake
I would say that in general "the box is a service" and "the box needs to keep state" should be distinct cases. Or rather, to the extent that "service" means "executes non-generic logic", having both the service bit and the state bit indicates that you should probably have two boxes.
@HalvarFlake
That said, broadly agree. I think there are reasons around release decoupling, mutability, etc. why it can be useful to decouple more, but only in contexts where your developer team isn't mature enough to treat library call boundaries with the same care that they treat service calls. Sadly, this seems to be most teams.
@dymaxion @HalvarFlake my experience is that developers who don't treat libraries with care don't apply any care to networked service calls and their possible failure scenarios either.

@hikhvar @dymaxion @HalvarFlake Sure. However, it's often easier to roll back one microservice than one library when people are not being disciplined. Usually because RPC layers don't really allow for the expressiveness of a library boundary, and RPC systems always have to deal with old and new versions coexisting during the initial rollout.

If you have to roll back the monolith, then all engineers who work on it are stalled, vs. rolling back one microservice.

@hikhvar
Yes and no. Yes, tightly-coupled teams gonna write tightly-coupled code, but there's more expectation of having to deal with out-of-sync version changes with services.
@HalvarFlake 5/ you need to add a feature that requires an incompatible framework, an incompatible programming language, or a dev team that had better not touch your monolith.
@HalvarFlake for (5), "you have several boxes from different vendors" that you would like to integrate. Particularly in combination with your integration implying many-to-many interfaces.
@HalvarFlake and related to your parallelism point, redundancy & resilience.
Regarding 5), I think isolation should also be on the list: failure isolation, privilege isolation, etc.