+1 ... You do (3) until someone hires a "data engineer" straight out of a bootcamp who tells everyone above you that the team is not following "modern practices" and is using archaic approaches.
Then you get pulled into regular meetings to explain the timeline for migrating to a proper structure.
Now, there will come a time when you'll need to automate this. If/when that time comes, and you already have decent cloud competency, it is far easier to tie everything together using S3, SNS, SQS, and Lambda (or their equivalents) than to go the Kubernetes/Kafka route.
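To make the S3 → SNS → SQS → Lambda chain concrete, here's a minimal sketch of what the Lambda at the end of that pipeline typically has to do: unwrap the SQS record, then the SNS envelope inside it, then the original S3 event inside that. This is an illustrative handler, not anyone's production code; the function name and the "real work" step are assumptions.

```python
import json


def handler(event, context):
    """Hypothetical Lambda handler for an S3 -> SNS -> SQS -> Lambda chain.

    With a standard (non-raw) SNS subscription, each SQS record's body is
    the JSON SNS envelope, and its "Message" field is the original S3
    event notification as a string.
    """
    processed = []
    for record in event.get("Records", []):
        sns_envelope = json.loads(record["body"])       # SQS body = SNS envelope
        s3_event = json.loads(sns_envelope["Message"])  # SNS Message = S3 event
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            processed.append((bucket, key))             # real work would go here
    return processed
```

One wrinkle worth knowing: if you enable "raw message delivery" on the SNS → SQS subscription, the SNS envelope disappears and the SQS body is the S3 event directly, so the middle `json.loads` goes away.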
I think there's another side to this coin: very small software teams moving quickly in markets where truly senior devs are scarce. It makes far more engineering and business sense to invest in cloud infrastructure than to build and control all of those systems in-house. I can hire a middle-of-the-road developer and trust that they've (at the very least) heard of the AWS or GCP tools/services we're using. But if I wrote my own systems in clojure/elixir/whatever (even though that's what I'd prefer to do), the odds that a new engineer will know what to do are nowhere near as good, and it'll take months to train them up to even a basic level of competency. You can make all sorts of "it's better in the long run" arguments, but those don't help when the C suite says "yeah sure, maybe that's the best approach, but we need to get this done right now." That's where clicking a few buttons to spin up a load balancer in front of a handful of serverless handlers becomes rather nice.