What we do (physics simulation software) doesn’t need all the complexity (in my opinion as a longtime software developer & tester) and software engineering knowledge that splitting stuff into microservices requires.
Only have as much complexity as you absolutely need; the old saying “Keep it simple, stupid” still has a lot of truth.
But the path is set, so I’ll just do my best as an individual contributor for the company and the clients who I work with.
If not, do the monolith thing as long as you can.
But if you're processing jobs that need to be handed off to a GPU, just carve out a service for it. Stop lamenting over microservices.
If you've got 100+ engineers and different teams own different things, try microservices. Otherwise, maybe keep doing the monolith.
If your microservice is as thin as leftpad.js and hosts only one RPC call, maybe don't do that. But if you need to carve out a thumbnailing service or authC/authZ service, that's a good boundary.
There is no "one size fits all" prescription here.
>> Microservices make sense in very specific scenarios where distinct business capabilities need independent scaling and deployment. For example, payment processing (security-critical, rarely updated) differs fundamentally from recommendation engine (memory-intensive, constantly A/B tested). These components have different scaling patterns, deployment cycles, and risk profiles, which justify separate services.
Then, after they became popular, people got carried away with the "micro" bit, and "microservices" started getting rejected because the associated practice had skewed in the opposite direction that had caused "SOA" to be rejected.
I guess the next iteration needs to be "goldilocks services".
The other key difference between microservices and other architectures is that each microservice should do its primary function (temporarily) without hard dependencies, which basically means having a copy of the data that's needed. Service Oriented Architecture doesn't have this as one of its properties which is why I think of it as a mildly distributed monolith. "Distributed monolith" is the worst thing you could call a set of microservices--all the pain without the gains.
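A minimal sketch of that "keep your own copy of the data" property, assuming Python and a hypothetical upstream catalog service (the URL and payload shape are invented for illustration):

```python
# Sketch: a service keeps a local snapshot of reference data it needs from
# another service, refreshing it periodically and falling back to the last
# good copy when the owner is unreachable.
import time
import requests

CATALOG_URL = "https://catalog.internal/api/products"  # hypothetical upstream


class ProductCache:
    def __init__(self) -> None:
        self.products: dict[str, dict] = {}
        self.last_refresh: float = 0.0

    def refresh(self) -> None:
        try:
            resp = requests.get(CATALOG_URL, timeout=5)
            resp.raise_for_status()
            self.products = {p["id"]: p for p in resp.json()}
            self.last_refresh = time.time()
        except requests.RequestException:
            # Upstream is down: keep serving the stale-but-usable copy. This is
            # what lets the service do its primary job (temporarily) without a
            # hard dependency on the catalog service.
            pass

    def get(self, product_id: str) -> dict | None:
        return self.products.get(product_id)
```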
Google played a role in popularizing the microservice approach.
When I was at Google, a microservice would often be worked on with teams of 10-30 people and take a few years to implement. A small team of 4-5 people could get a service started, but it would often take additional headcount to productionize the service and go to market.
I have a feeling people overestimate how small microservices are and underestimate how big monorepos are. About 9 times out of ten when I see something called a monorepo it's for a single project as opposed to a repo that spans multiple projects. I think the same is true of microservices. Many things that Amazon or Google considers microservices might be considered monoliths by the outside world.
For example, the API server only reads and writes resources to etcd. A separate microservice called the scheduler does the actual assignment of pods to nodes by watching the resource store for changes and matching pods against available nodes. And yet another microservice, the kubelet, lives on each node; it accepts the assignment and boots up (or shuts down) the pods assigned to its node. The API server does none of that.
You can run the kubelet all on its own, or even replace it to change part of the architecture. Someone was building a kubelet that uses systemd instead of Docker, and Fly.io (who seem to hate Kubernetes) wrote a kubelet that could stand things up using their edge infrastructure.
The API server also does some validations, but it also allows other microservices to insert themselves into the validation chain through pod admission webhooks.
Other examples: deployment controllers, ReplicaSet controllers, horizontal pod autoscalers, and cluster autoscalers all work independently of each other yet coordinate to respond to changing circumstances. Operators are microservices that manage a specific application component, such as Redis, RabbitMQ, PostgreSQL, Tailscale, etc.
One of the big benefits of this is that Kubernetes becomes very extensible. Third-party vendors can write custom microservices to work with their platform (for example, storage interfaces for GCP, AWS, Azure, Ceph, etc.). An organization implementing Kubernetes can tailor it to fit its needs, whether that is something minimal or something operating in highly regulated markets.
Ironically, Kubernetes is typically seen and understood by many to be a monolith. Kubernetes, and the domain it was designed to solve, is complex, but incorrectly understanding Kubernetes as a monolith creates a lot of confusion for people working with it.
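To make the controller pattern described above concrete, here's a rough sketch of a toy controller using the official `kubernetes` Python client; the "react to unscheduled pods" logic is a placeholder, not how the real scheduler works:

```python
# Sketch: the controller pattern -- watch the API server's resource store and
# react to changes, independently of every other controller.
from kubernetes import client, config, watch


def run_toy_controller() -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # The API server only stores and serves resources; a scheduler-, kubelet-,
    # or operator-style component does its actual work in a loop like this.
    for event in w.stream(v1.list_pod_for_all_namespaces):
        pod = event["object"]
        if event["type"] == "ADDED" and pod.spec.node_name is None:
            print(f"unscheduled pod: {pod.metadata.namespace}/{pod.metadata.name}")
            # a real scheduler would pick a node and post a Binding here


if __name__ == "__main__":
    run_toy_controller()
```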
You are free to do that, but that's a very specific take on microservices that is at odds with the wider industry. As I said above, what I was describing is what Google referred to internally as microservices. Microservices are not smaller than that as a matter of definition, but you can choose to make them extra tiny if you wish to.
If you look at what others say about microservices, it's consistent with what I'm saying.
For example, Wikipedia gives as a dichotomy: "Service-oriented architecture can be implemented with web services or Microservices." By that definition every service based architecture that isn't built on web services is built on microservices.
Google Cloud lists some examples:
> Many e-commerce platforms use microservices to manage different aspects of their operations, such as product catalog, shopping cart, order processing, and customer accounts.
Each of these microservices is a heavy lift. It takes a full team to implement a shopping cart correctly, or customer accounts. In fact each of these has multiple businesses offering SaaS solutions for that particular problem. What I hear you saying is that if your team were, for example, working on a shopping cart, they might break the shopping cart into smaller services. That's okay, but that's not in any way required by the definition of microservices.
Azure says https://learn.microsoft.com/en-us/azure/architecture/guide/a...
> Model services around the business domain. Use DDD to identify bounded contexts and define clear service boundaries. Avoid creating overly granular services, which can increase complexity and reduce performance.
Azure also has a guide for determining microservice boundary where again you'd need a full team to build microservices of this size https://learn.microsoft.com/en-us/azure/architecture/microse...
And by that, I mean that I have at times seen it, and perhaps even personally used it, as a cudgel: "This thing has a specific contract and it is implicitly separate, and it forces people to remember that if their change needs to touch other parts, well, then they have to communicate it." In the real world you sometimes need to partition software enough that engineers don't stray too far outside the boundaries one way or another (i.e. changes inadvertently breaking something else because they were not focused enough).
there are of course microservices for things like news feed etc, but iirc all of fb.com and mobile app graphql is from the monolith by default.
The BEAM ecosystem (Erlang, Elixir, Gleam, etc) can do distributed microservices within a monolith.
A single monolith can be deployed in different ways to handle different scalability requirements. For example, a distinct set of pods responding to endpoints for reports, another set for just websocket connections, and the remaining ones for the rest of the endpoints. Those can be independently scaled but released on the same cadence.
There was a long form article I once read that reasoned through this. Given M number of code sources, there are N number of deployables. It is the delivery system’s job to transform M -> N. M is based on how the engineering team(s) work on code, whether that is a monorepo, multiple repos, shared libraries, etc. N is what makes sense operationally . By making it the delivery system’s job to transform M -> N, then you can decouple M and N. I don’t remember the title of that article anymore. (Maybe someone on the internet remembers).
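A minimal sketch of the "one code source, several deployables" idea, assuming a FastAPI-style monolith where a ROLE environment variable (a name invented here) selects which route groups a given pod set serves:

```python
# Sketch: one monolith artifact (M = 1 code source) deployed as several pod
# sets (N deployables), each serving only the route groups its ROLE allows.
import os
from fastapi import APIRouter, FastAPI

reports = APIRouter(prefix="/reports")
core = APIRouter()


@reports.get("/daily")
def daily_report() -> dict:
    # stands in for a heavy reporting endpoint
    return {"report": "..."}


@core.get("/health")
def health() -> dict:
    return {"status": "ok"}


ROLE = os.environ.get("ROLE", "all")
app = FastAPI()

if ROLE in ("all", "reports"):
    app.include_router(reports)
if ROLE in ("all", "core"):
    app.include_router(core)

# Each pod set runs the same image with a different ROLE, so the groups scale
# independently but ship on the same release cadence.
```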
This ain't new. Any language supporting loading modules can give you the organization benefit of microservices (if you consider it a benefit that is - very few orgs actually benefit from the separation) while operating like a monolith. Java could do it 20+ years ago, just upload your .WAR files to an application server.
Erlang could do it almost 40 years ago.
It can be used to upgrade applications at runtime without stopping the service. That works well in Erlang, it’s designed from the ground up for it. I know of a few places that used that feature.
It might see the light of day at some point in the future, but if the past is anything to go by...
Whatsapp is implemented with Erlang.
It is a more robust platform for agentic AI, and I’d certainly start with a BEAM language for agentic AI.
“Small teams, big results” is a characteristic that I’m very interested in, given our post-ZIRP reality.
But they're the only real case studies.
If I were to say "Go", people can point to big projects like Docker, Kubernetes, etcd, Google's internal use, and a few others (Uber?).
Erlang just doesn't have that sort of buy-in, which is concerning because it's been around longer than Go (as a FOSS language); heck, it's been around longer than Python (but it was proprietary back then).
Speaking as someone that's never used it, that's got "don't bother unless you've got an academic interest in it" written all over it.
So it remains a “secret” weapon and I am fine with that. Not everything has to be validated by popularity in order to be unreasonably effective.
Besides: Erlang predates Java.
NGL, it was clever enough that every few years I think about trying to redo the concept in another language...
Although it would be neat to implement some of the benefits of a service mesh for BEAM — for example, consistently applying network retry/circuit breaker policies, or dynamically scalable genservers.
Most companies should be perfectly fine with a service oriented architecture. When you need microservices, you have made it. That's a sign of a very high level of activity from your users, it's a sign that your product has been successful.
Don't celebrate before you have cause to do so. Keep it simple, stupid.
I think the main reason microservices were called “microservices” and not “service-oriented architecture” is that they were an attempt to revive the original SOA concept at a time when “service-oriented architecture” as a name was still tainted by its perceived association with XML and the WS-* series of standards (and, ironically, often with systems that supported some subset of those standards for interaction despite not really applying the concepts of the architectural style).
>> For most systems, well-structured modular monoliths (for most common applications, including startups) or SOA (enterprises) deliver comparable scalability and resilience as microservices, without the distributed complexity tax. Alternatively, you may also consider well-sized services (macroservices, or what Gartner proposed as miniservices) instead of tons of microservices.
I've seen a few regrettable things at one job where they'd ended up shipping a microservice-y design without much thought about service interfaces. One small example: team A owns a service that runs as an overnight job making customer-specific recommendations that get written to a database, and team B owns a service that surfaces these recommendations as a customer-facing app feature and reads directly from that database. It probably ended up that way because team A had the data scientists, team B had the app backend engineers for that feature, they had to ship something, and no architect or senior engineer put their foot down about interfaces.
That'd be a pretty reasonable design if team A and team B were the same team, since they could then regard the database as internal, with no way to access it except through a service with a well-defined interface. Failing that, it's hard to evolve the schema of the data model in the DB: there's no well-defined interface to decouple implementation changes from consumers, and the consuming team B has its own list of quarterly priorities.
Microservices & alternatives aren't really properties of the technical system in isolation, they also depend on the org chart & which teams owns what parts of the overall system.
SOA: pretty good, microservices: probably not a great idea, microservices without SOA: avoid.
For anyone unfamiliar with SOA, there's a great sub-rant in Steve Yegge's 2011 google platforms rant [1][2] focusing on Amazon's switch to service oriented architecture.
[1] https://courses.cs.washington.edu/courses/cse452/23wi/papers... [2] corresponding HN thread from 2011 https://news.ycombinator.com/item?id=3101876
I'm curious, because the specific list of problems and pain points (if--big if!--everyone there agrees on what they are) can help more clearly guide the decisions about what the next architecture should look like--SoA, monolithic, and so on.
The biggest obstacle is changing people's mindsets from legacy programming to modern DevOps workflow. Making them believe it's worth the effort.
I apologize if that sounds critical; it's not meant to be. Microservices/SoA are often the best available solution given human and technical constraints--I'm not skeptical, just curious.
This whole thread is about how microservices are often abused when they're not necessary, and to clarify my point, I completely agree with that.
My teams pick up a piece of work, check out the code, run the equivalent of docker compose up, and build their feature. They commit to git, merge to dev, then to main, and it runs through a pipeline to deploy. We do this multiple times a day. Doing that with a large monolith that combines all these endpoints into one app wouldn't be hard, but it adds no benefit, only the overhead of four teams frequently working on the same code and needing to rebase and pull in changes, rather than driving simple atomic changes. Each service gets packaged as a container and deployed to ECS Fargate, on a couple of EC2 instances that are realistically a bit oversubscribed if all the containers suddenly got hammered, but 90% of the time they don't, so it's incredibly cost effective.
When I see the frequent discussions around microservices, I always want to comment that if you have a dysfunctional org, no architecture will save you, and if you have a functional org, basically any architecture is fine. But for my cases, I find that miniservices, if you will, domain driven and sharing a persistence layer, are often a good way to go for a couple of small teams.
You have to pull in changes either way. Either there are contract changes between teams or there aren't. If there aren't, you don't need to rebase just do squash and merge. If there are, then you're going to either find out about the changes now or you're going to find out about them in production when your container starts throwing errors.
Every service boundary replaces a function call with a network request. That one choice cascades into distributed transactions, eventual consistency, and operational overhead most teams don't need. ¯\_(ツ)_/¯
Consider this: every API call (or function call) in your application has different scaling requirements. Every LOC in your application has different scaling requirements. What difference does it make whether you scale it all "together" as a monolith or separately? One step further, I'd argue it's better to scale everything together because the total breathing room available to any one function experiencing unusual load is higher than if you deployed everything separately. Not to mention intra- and inter-process comm being much cheaper than network calls.
The "correct" reasons for going microservices are exclusively human -- walling off too much complexity for one person or one team to grapple with. Some hypothetical big brain alien species would draw the line between microservices at completely different levels of complexity.
When people talk about scaling requirements they are not referring to minutiae like "this function needs X CPU per request and this other function needs Y CPU per request"; they are talking about whether particular endpoints are primarily constrained by different things (i.e. CPU vs memory vs disk). This is important because if I need to scale up machines for one endpoint that requires X CPU, but the same service has another endpoint requiring Y memory whereas my original service only needs Z memory, and Y is significantly larger than Z, then suddenly you have to pay a bunch of extra money to scale up your CPU-bound endpoint because you are co-hosting it with a memory-bound endpoint.
If all your endpoints just do some different logic and all hit a few redis endpoints, run a few postgres queries, and assemble results then keep them all together!!!
EDIT: my original post even included the phrase "significantly different" to describe when you should split a service!!! It's like you decided to have an argument with someone else you've met in your life, but directed it at me
Eh... I think you can achieve this in a simpler way with asymmetric scaling groups and smartly routing workloads to the right group. I feel like it's an ops/infra problem. There's no reason to constrain the underlying infra to be homogenous.
(To clarify, I’m not disagreeing with you!)
Operationally, it is very nice to be able to update one discrete function on its own in a patch cycle. You can try to persuade yourself you will pull it off with a modular monolith, but the physical isolation of separate services provides guarantees that no amount of testing / review / good intentions can.
However, it's equally an argument for SOA as it is for microservices.
Literally one function per service, though, is certainly overkill unless you're pretty small and trying to avoid managing any servers for your application.
This split was probably a mistake, as the interface we exposed resulted in us making twice as many DB calls as we actually needed to.
One of the stored procs needed a magic number as a parameter, which we looked up via another DB query.
One of the other Devs on the team tried to convince me to write a separate gRPC server to run this (trivial) query.
"We're doing microservices, so we need to make everything as small as possible. Looking up this value is a separate responsibility from inserting data."
Luckily our tech lead was sane and agreed with me.
Whether it's a program that does something well... or simply a function/procedure --- it all depends on the problem I/we are trying to solve.
I never liked using the word "Microservices" but my aim has always been to build SIMPLE solutions. I keep learning new words in this world. For the most part I am building "Miniservices" but there are a few that are considered "Microservices" -- and again, they are not complicated!
I just like to refer to it as "Distributed Computing" because the solution can be anywhere between Monolithic and Microservices. Truth is you are building a combination of them that communicate in one form or another.
I will always remember a Till system (past job) that was sending data to the server poorly and slowly with a Monolithic solution and a Database. Was it becoming a pain to handle with new shops being added in Europe? Yes. However, this is NOT the fault of Monolithic. It's just the solution that was used for "good" originally but is struggling now.
The solution I replaced it with allowed data to be sent to the server using ZeroMQ. It worked out well... it was fast and reliable. Each section was broken down on the server. Again - is it a perfect solution, or does it prove that "Monolithic is worse than Microservices" (or Distributed Computing)? NO! Truth is our software is a mix of them all!
But it can be hard to encourage adoption. It’s not HTTP, or a conventional queue system.
Requires lots of explanations, thinking, and eventually meetings.
However -- I cannot praise the use of ROUTER-DEALER enough!! What a GREAT pattern for sending large chunks of data without waiting for a reply for each.
HTTP is not fit for such a task.
However - I totally get you! Trying to explain certain decisions really does take up time and effort. Before you know it, I have lost 4 hours one day, a few another, etc.
In the end you ask yourself whether you should have just done something mediocre... but that everyone understands.
("mediocre" is not the correct word to use. I mean I could have gone with Kafka or RabbitMQ. It's just an extra layer which would have involved infrastructure and further delays (at that time))
I think what changed things is that FaaS came along and people started describing nanoservices as microservices, which led to some really dumb decisions.
I've worked on a true monolith and it wasn't fun. Having your change rolled back because another team made a mistake and it was hard to isolate the two changes was really rough.
I don't want microservices; I want an executable. Memory is shared directly, and the IDE and compiler know about the whole system by virtue of it being integrated.
But, unless you have some way of enforcing that access between different components happens through some kind of well defined interfaces, the codebase may end up very tightly coupled and expensive or impractical to evolve and change, if shared memory makes it easy for folks to add direct dependencies between data structures of different components that shouldn't be coupled.
You are describing the "microservice architecture" that I currently loathe at my day job. Fans of microservices would accurately say "well that's not proper microservices; that's a distributed monolith" but my point is that choosing microservices does not enforce any kind of architectural quality at all. It just means that all of your mistakes are now eternally enshrined thanks to Hyrum's Law, rather than being private/unpublished functions that are easy to refactor using "Find All References" and unit tests.
Every compiled language has the concept of "interfaces", and can even load compiled modules/assemblies if you insist on them being built separately.
The compiler will enforce interface compliance much better than hitting untyped JSON endpoints over a network.
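A small sketch of that in-process contract idea in Python, with a type checker (mypy/pyright) standing in for the compiler; the Thumbnailer protocol and its implementation are hypothetical:

```python
# Sketch: an in-process "service contract" enforced by a type checker
# instead of a network boundary.
from typing import Protocol


class Thumbnailer(Protocol):
    def thumbnail(self, image: bytes, width: int, height: int) -> bytes: ...


class NoopThumbnailer:
    def thumbnail(self, image: bytes, width: int, height: int) -> bytes:
        # real resizing elided; the point is the enforced signature
        return image


def render_preview(t: Thumbnailer, image: bytes) -> bytes:
    return t.thumbnail(image, 128, 128)


# Passing an object whose thumbnail() has a different signature is a type
# error at check time, not a malformed-JSON surprise at runtime.
preview = render_preview(NoopThumbnailer(), b"\x89PNG...")
```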
I have never done this yet.
But I love the idea of it.
I guess I need to learn Javascript at some point.
- the uploader API
- the uploader UI
- the frame API
- the frame UI
UIs are SSG'd with solid-js and solid-start then served with gin.
It's really fun.
The only time I can think where a JVM might be faster is if you have a multi-tenant setup. In that case, the JVM can be more effective with the GC vs having multiple JVMs running.
Uberjars are (typically) built by extracting all the classes from jar dependencies and combining them into a single jar. That's all zip compression work.
Container layers are simply saved off filesystem modifications. If you use something like Jib to build your image, then the actual deployable should actually be a lot smaller than what you could do with an uber jar. That's because Jib will put your dependencies in one layer and your application jar in another. Assuming you work like most everyone, then that means the only thing that usually gets transferred is your application code. Dependencies only get sent if you change them or the base image.
Three-tier architecture proves, time and time again, to be robust for most workloads.
Put it into a monorepo so the other teams have visibility in what is going on and can create PRs if needed.
But it is a bit sad that the poster apparently never bought a pizza just for themselves.
The optimum is probably closer to 1 than to 2.
"Traditional" three-tier, where you have a web server talking to an application server talking to a database server, seems like overkill; I'd get rid of the separate application tier.
If your tiers are browser, web API server, database: then three tiers still makes sense.
The current hell is x years of undisciplined (in terms of perf and cost) new ORM code being deployed (SQLAlchemy). We do an insane number of read queries per second relative to our usage.
I honestly think the newish devs we have hired don't understand SQL at all. They seem to think of it as some arcane low level thing people used in the 80s/90s.
That does not mean it never makes sense to split things up. It just means there may be differing definitions of what "micro" means, and there are problems where the service domains are neatly separable and others where they are not (or where you won't win anything by separating them).
Turning a thing into a service is just like turning a thing into a module or its own function. It can be a good idea or a bad idea depending on circumstances.
So no I don't want microservices (again), but sometimes it's still the right thing.
1. Full-on microservices, i.e. one independent lambda per request type, is a good idea pretty much never. It's a meme that caught on because a few engineers at Netflix did it as a joke that nobody else was in on.
2. Full-on monolith, i.e. every developer contributes to the same application code that gets deployed, does work, but you do eventually reach a breaking point as either the code ages and/or the team scales. The difficulty of upgrading core libraries like your ORM, monitoring/alerting, pandas/numpy, etc, or infrastructure like your Java or Python runtime, grows superlinearly with the amount of code, and everything being in one deployed artifact makes partial upgrades either extremely tricky or impossible depending on the language. On the operational and managerial side, deployments and ownership (i.e. "bug happened, who's responsible for fixing?") eventually get way too complex as your organization scales. These are solvable problems though, so it's the best approach if you have a less experienced team.
3. If you're implementing any sort of SoA without having done it before -- you will fuck it up. Maybe I'm just speaking as a cynical veteran now, but IMO lots of orgs have keen but relatively junior staff leading the charge for services and kubernetes and whatnot (for mostly selfish resume-driven development purposes, but that's a separate topic) and end up making critical mistakes. Usually some combination of: multiple services using a shared database; not thinking about API versioning; not properly separating the domains; using shared libraries that end up requiring synchronized upgrades.
There's a lot of service-oriented footguns that are much harder to unwind than mistakes made in a monolithic app, but it's really hard to beat SoA done well with respect to maintainability and operations, in my opinion.
The main time I can see this making sense is when the data access patterns are so different in scale and frequency that they're optimizing for different things that cause resource contention, but even then, my question would become do you really need a separate instance of the same kind of DB inside the service, or do you need another global replica/a new instance of a new but different kind of DB (for example Clickhouse if you've been running Postgres and now need efficient OLAP on large columnar data).
Once you get to this scale, I can see the idea of cell-based architecture [1] making sense -- but even at this point, you're really looking at a multi-dimensionally sharded global persistence store where each cell is functionally isolated for a single slice of routing space. This makes me question the value of microservices with state bound to the service writ large and I can't really think of a good use case for it.
[1] https://docs.aws.amazon.com/wellarchitected/latest/reducing-...
The issue with this is schema evolution. As a very simple example, let's say you have a User table, and many microservices accessing this table. Now you want to add an "IsDeleted" column to implement soft deletion; how do you do that? First you need to add the actual column to the database, then you need to go update every single service which queries that table and ensure that it's filtering out IsDeleted=True, deploy all those services, and only then can you actually start using the column. If you must update services in lockstep like this, you've built a distributed monolith, which is all of the complexity of microservices with none of the benefits.
A proper service-oriented way to deal with this is have a single service with control of the User table and expose a `GetUsers` API. This way, only one database and its associated service needs to be updated to support IsDeleted. Because of API stability guarantees--another important guarantee of good SoA--other services will continue to only get non-deleted users when using this API, without any updates on their end.
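A minimal sketch of that single-owner approach, using SQLite purely to keep the example self-contained; the table and column names follow the example above, and the real thing would expose GetUsers as an RPC/HTTP endpoint rather than a local function:

```python
# Sketch: only the user service touches the users table; GetUsers hides the
# new IsDeleted column from every consumer.
import sqlite3


def get_users(conn: sqlite3.Connection) -> list[dict]:
    # Consumers call this API; they never see the soft-delete flag, so adding
    # it touched only this one service.
    rows = conn.execute(
        "SELECT id, name FROM users WHERE is_deleted = 0"
    ).fetchall()
    return [{"id": r[0], "name": r[1]} for r in rows]


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, is_deleted INTEGER DEFAULT 0)"
    )
    conn.execute("INSERT INTO users (name, is_deleted) VALUES ('alice', 0), ('bob', 1)")
    print(get_users(conn))  # only alice; bob is soft-deleted but invisible to callers
```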
> You lose data integrity, joining ability, one coherent state of the world, etc.
You do lose this! And it's one of the tradeoffs, and why understanding your domain is so important for doing SoA well. For subsets of the domain where data integrity is important, it should all be in one database, and controlled by one service. For most domains, though, a lot of features don't have strict integrity requirements. As a concrete though slightly simplified example, I work with IoT time-series data, and one feature of our platform is using some ML algorithms to predict future values based on historical trends. The prediction calculation and storage of its results is done in a separate service, with the results being linked back via a "foreign key" to the device ID in the primary database. Now, if that device is deleted from the primary database, what happens? You have a bunch of orphaned rows in the prediction service's database. But how big of a deal is this actually? We never "walk back" from any individual prediction record to the device via the ID in the row; queries are always some variant of "give me the predictions for device ID 123". So the only real consequence is a bit of database bloat, which can be resolved via regularly scheduled orphan checking processes if it's a concern.
It's definitely a mindshift if you're used to a "everything in one RDBMS linked by foreign keys" strategy, but I've seen this successfully deployed at many companies (AWS, among others).
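For what it's worth, the "regularly scheduled orphan checking" can be a very small job. A rough sketch, assuming the prediction service can ask the device service which IDs still exist (the endpoint, response shape, and table names here are hypothetical):

```python
# Sketch: a scheduled orphan sweep in the prediction service. It asks the
# device service which IDs still exist and deletes predictions for the rest.
import sqlite3
import requests

DEVICE_EXISTS_URL = "https://devices.internal/api/devices/exists"  # hypothetical


def sweep_orphans(conn: sqlite3.Connection, batch: int = 1000) -> int:
    ids = [r[0] for r in conn.execute(
        "SELECT DISTINCT device_id FROM predictions LIMIT ?", (batch,)
    )]
    if not ids:
        return 0
    resp = requests.post(DEVICE_EXISTS_URL, json={"ids": ids}, timeout=10)
    resp.raise_for_status()
    live = set(resp.json()["existing_ids"])
    orphans = [i for i in ids if i not in live]
    for device_id in orphans:
        conn.execute("DELETE FROM predictions WHERE device_id = ?", (device_id,))
    conn.commit()
    return len(orphans)  # a bit of bloat reclaimed; nothing user-facing depended on it
```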
The difference I generally see with shared-state microservices is that now you introduce a network call (although you have a singular master for your OLTP state), and with isolated state microservices, now you are running into multiple store synchronization issues and conflict resolution. Those tradeoffs are very painful to make and borderline questionable to me without a really good reason to sacrifice them (reasons I rarely see but can't in good faith say never happen).
Pertaining to your IoT example -- that's definitely a spot where I see a reason to move out of the cozy RDBMS, which is an access pattern that is predominated by reads and writes of temporal data in a homogenous row layout and seemingly little to no updates -- a great use case for a columnar store such as Clickhouse. I've resisted moving onto it at $MYCORP because of the aforementioned paranoia about losing RDBMS niceties (and our data isn't really large enough for vertical scaling to not just work) but I could see that being different if our data got a lot larger a lot more quickly.
Maybe putting it together, there are only a handful of situations I've seen where microservices are genuinely the right tool for a specific job (and where they create value even with shared state/a distributed monolith):
1) [Shared state] Polyglot implementation -- this is the most obvious one that's given leverage for me at other orgs; being able to have what's functionally a distributed monolith allows you to use multiple ecosystems at little to no ongoing cost of maintenance. This need doesn't happen for me that often given I often work in the Python ecosystem (so being able to drop down into cython, numba, etc is always an option for speed and the ecosystem is massive on its own), but at previous orgs, spinning up a service to make use of the Java ecosystem was a huge win for the org over being stuck in the original ecosystem. Aside from that, being able to deploy frontend and backend separately is probably the simplest and most useful variant of this that I've used just about everywhere (given I've mostly worked at shops that ship SPAs).
2) [Shared state] SDLC velocity -- as a monolith grows it just gets plain heavy to check out a large repository, set up the environment, run tests, and have that occur over and over again in CI. Knowing that only a subset of the codebase needs its build recipe and test suite run can create order-of-magnitude speedups in wall-to-wall CI time, which in my experience tends to be the speed limit for how quickly teams can ship code.
3) [Multi-store] Specialized access patterns at scale -- there really are certain workloads that don't play that well with RDBMS in a performant and simple way unless you take on significant ongoing maintenance burden -- two I can think of off the top of my head are large OLAP workloads and search/vector database workloads; no real way of getting around needing to use something like ElasticSearch when Postgres FTS won't cut it, and maybe no way around using something like Clickhouse for big temporal queries when it would be 10x more expensive and brittle to use postgres for it; even so, these still feel more like "multiple singleton stores" rather than "one store per service"
4) [Multi-store] Independent services aligned with separate lines of revenue -- this is probably the best case I can think of for microservices from a first principles level. Does the service stand on its own as a separate product and line of revenue from the rest of the codebase, and is it actually sold by and operated by the business that independently? If so, it really is and should be its own "company" inside a company and it makes sense for it to have the autonomy and independence to consume its upstream dependencies (and expose dependencies to its downstream) however it sees fit. When I was at AWS, this was a blaringly obvious justification, and one that made a lot of intuitive sense to me given that so much of the good stuff that Amazon builds to use internally is also built to be sold externally.
5) [Multi-store] Mechanism to enforce hygiene and accountability around organizational divisions of labor -- to me, this feels like the most questionable and yet most common variant that I often see. Microservices are still sexy and have the allure of creating high-visibility, career-advancing project track records for ambitious engineers, even if to the detriment of the good of the company they work for. Microservices can be used as a bureaucratic mechanism to enforce accountability and ownership of one part of the codebase to a specific team, to prevent the illegibility of a tragedy of the commons -- but ultimately, I've often found that the forces and challenges that led to the original tragedy of the commons are not actually solved any better by the move to microservices, and if anything the cost of solving them is actually increased.
This makes it clear when you might want microservices: you're going through a period of hypergrowth and deployment is a bigger bottleneck than code. This made sense for DoorDash during COVID, but that's a very unusual circumstance.
What I want is a lightweight infrastructure for macro-services. I want something to handle the user and machine-to-machine authentication (and maybe authorization).
I don't WANT the usual K8s virtual network for that, just an easy-to-use module inside the service itself.
You should be able to spin up everything locally with docker-compose.
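As an illustration of the "auth as a module inside the service" idea, here's a hedged sketch assuming a shared signing key handed to each macro-service and PyJWT for verification; the claim names and header handling are assumptions, not any particular product:

```python
# Sketch: authentication as a module inside the service rather than a mesh
# sidecar or K8s networking feature.
import os
import jwt  # PyJWT

SIGNING_KEY = os.environ["SERVICE_JWT_KEY"]


def caller_identity(authorization_header: str) -> dict:
    """Verify a bearer token from a user or another service; return its claims."""
    scheme, _, token = authorization_header.partition(" ")
    if scheme.lower() != "bearer" or not token:
        raise PermissionError("missing bearer token")
    try:
        return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    except jwt.InvalidTokenError as exc:
        raise PermissionError(f"invalid token: {exc}") from exc

# Usage in any handler, no K8s virtual network required:
#   claims = caller_identity(request.headers["Authorization"])
#   if "reports:read" not in claims.get("scopes", []): raise PermissionError(...)
```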
> I don't WANT the usual K8s virtual network for that, just an easy-to-use module inside the service itself.
K8s makes sense if you have a dedicated team (or at least an engineer) and if you really need the advanced stuff (blue/green deployments, scaling, etc). Once it's properly set up it's actually a very pleasant platform.
If you don't need that, Docker (or preferably Podman) is indeed the way to go. You can actually go quite far with a VPS or a dedicated server these days. By the time you outgrow the most expensive server you can (reasonably) buy, you can probably afford the staff to roll out a "big boy" infrastructure.
We're using Docker/Podman with docker-compose for local development, and I can spin up our entire stack in seconds locally. I can attach a debugger to any component, or pull it out of Docker and just run it inside my IDE. I even have an optional local Uptrace installation for OTEL observability testing.
My problem is that our deployment infrastructure is different. So I need to maintain two sets of descriptions of our services. I'd love a solution that would unify them, but so far nothing...
Docker Compose for local development is fine. If your K8s setup is so crazy complex that you need to test it locally, please stop.
It's trivial with my current setup, but not really possible with Tilt.
With Compose, you get proper n-tier application containerization with immutability. By adding an infrastructure-as-code tool such as Terraform to abstract your IT environment, you can deploy your application on-premises, in the cloud, or at a customer site with a single command.
For clustering needs, there’s Incus, and finally, Kubernetes for very fast scalability (under a minute), massive deployments on large clusters, cloud offloading, and microservices.
Almost nobody truly needs the complexity of Kubernetes. The ROI simply isn’t there for the majority of use cases.
And in a few days, we're going to get a long thread about how software is slow and broken and terrible, and nobody will connect the dots. Software sucks because the way we build it sucks. I've had the distinct privilege of helping another team support their Kubernetes monstrosity, which shat the bed around double-digit requests per second, and it was a comedy of errors. What should've otherwise just been some Rails or Django application with HTML templating and a database was three or four different Kubernetes pods, using gRPC to poorly and unnecessarily communicate with each other. It went down all. The. Time. And it was a direct result of the unnecessary complexity of Kubernetes and the associated pageantry.
I would also like to remind everyone that Kubernetes isn't doing anything your operating system can't do, only better. Networking? Your OS does that. Scheduling? Your OS does that. Resource allocation and sandboxing? If your OS is decent, it can absolutely do that. Access control? Yup.
I can confidently say that 95% of the time, you don't need Kubernetes. For the other 5%, really look deep into your heart and ask yourself if you actually have the engineering problems that distributed systems solve (and if you're okay with the other problems distributed systems cause). I've had five or six jobs now that shoehorned Kubernetes into things, and I can confidently say that the juice ain't worth the squeeze.
It would be a blessing if people actually did that, because then they'd avoid useless distributed systems.
> using gRPC to poorly and unnecessarily communicate
At least you've had the blessing of it being gRPC and not having to manually write JSON de/serializers by hand.
> Kubernetes isn't doing anything your operating system can't do
Kubernetes is good if you need to orchestrate across multiple machines. This of course requires an actual need for multiple machines. If you're doing so with underpowered cloud VMs (of which you waste a third of the RAM on K8s itself), just get a single bigger VM and skip K8s.
Which one?
Large, shared database tables have been a huge issue in the last few jobs that I have had, and they are incredibly labor intensive to fix.
It's partly why I've realised more over time that learning computer science fundamentals actually ends up being super valuable.
I'm not talking about anything particularly deep either, just the very fundamentals you might come across in year one or two of a degree.
It sort of hooks back in over time as you discover that these people decades ago really got it, and all you're really doing as a software engineer is rediscovering these lessons yourself: thinking there's a better way, trying it, seeing it's not better, but noticing the fundamentals that are either being encouraged or violated, and pulling just those back out into a simpler model.
I feel like that's mostly what's happened with the swing over into microservices and the swing back into monoliths, pulling some of the fundamentals encouraged by microservices back into monolith land but discarding all the other complexities that don't add anything.
I actually like close to a full microservice architecture model, once you allow them all to share the database (possibly through a shared API layer).
Why small orgs use microservices: it makes it nearly physically impossible to do certain classes of dumb shit.
I was stunned... He comes up with this stuff all the time. Thanks Matt.
However I go the other way than you: I have found AI needs as much context as possible and that means it understands monoliths (or fatter architectures) better. At least, the agentic style approach where it has access to the whole git tree / source repository. I find things break down a lot when changes are needed across source repositories.
And now, SaaS is finally making the jump to the last position - hybrid/mini.
For example, I work in a small company with a data processing pipeline that has lots of human in the loop steps. A monolith would work, but a major consideration with it being a small company is cloud cost, and a monolith would mean slow load times in serverless or persistent node costs regardless of traffic. A lot of our processing steps are automated and ephemeral, and across all our customers, the data tends to look like a wavelet passing through the system with an average center of mass mostly orbiting around a given step. A service oriented architecture let us:
- Separate steps into smaller “apps” that run on demand with serverless workers.
- avoid the scaling issues of killing our database with too many concurrent connections by having a single “data service”—essentially organizing all the wires neatly.
- ensure that data access (read/write on information extracted from our core business objects) happens in a unified manner, so that we don’t end up with weird, fucky API versioning.
- for the human in the loop steps, data stops in the job queue at a CRUD app as a notification, where data analysts manually intervene.
A monolith would have been an impedance mismatch for the inherent “assembly line” model here, regardless of dogma and the fact that yes, a monolith could conceivably handle a system like this without as much network traffic.
You could argue that the data service is a microservice. It’s a single service that serves a single use case and guards its database access behind an API. I would reply to any consternation or foreboding due to its presence in a small company by saying “guess what, it works incredibly well for us. Architecture is architecture: the pros and cons will out, just read them and build what works accordingly.”
I want just services.
I had an architect bemoan the suggestion that we use a microservice, until he had to begrudgingly back down when he was told that the function we were talking about (running a CLIP model) would mean attaching a GPU to every task instance.
/s