The article is totally buzzword-compliant. It's an important subject being addressed poorly.
Facebook is internally architected something like that. Each Facebook page display involves about a hundred machines. The internal message passing isn't REST, though; it's a real RPC system. Many of the components are written in PHP, and Facebook has a PHP compiler to speed things up.
Security is really tough in such systems. Exactly who's trusting whom, and who authenticates what, is a hard problem. Vulnerabilities are usually of the form "A cannot do X, but B can do X, and A can talk to B. Can A induce B to do X?"
More fundamentally, we're still not very good at inter-process communication. There's REST/JSON, which means a lot of parsing. SOAP is generally considered too clunky. Google uses protocol buffers internally, which requires running code through special pre-compilers that understand the message definitions. Microsoft, of course, has several systems of their own.
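To make the parsing-overhead point concrete, here's a toy Python comparison (not Google's actual wire format, just a sketch of the idea): the same record encoded as self-describing JSON, where field names travel in every message and have to be parsed out, versus a fixed binary layout agreed out of band, the way a .proto schema would be.

```python
import json
import struct

def encode_json(user_id, score):
    # Self-describing text: field names are repeated in every message
    # and the receiver must parse them back out each time.
    return json.dumps({"user_id": user_id, "score": score}).encode()

def encode_binary(user_id, score):
    # Schema agreed out of band (as with a .proto file): just the values,
    # here a little-endian u64 followed by a double.
    return struct.pack("<Qd", user_id, score)

def decode_binary(data):
    # Decoding is a fixed-offset read, no text parsing at all.
    return struct.unpack("<Qd", data)

json_msg = encode_json(123456789, 0.75)
bin_msg = encode_binary(123456789, 0.75)
```

The binary message is 16 bytes regardless of field names; the JSON version is more than twice that for this record, and the gap grows with nesting.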
Most OSs don't support message passing well. What you usually want is an inter-process subroutine call. What the OS usually gives you is an I/O operation. (QNX gets this right, but only real-time programmers care.) Message passing came late to the UNIX/Linux world, and is decidedly an inefficient afterthought there.
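The "subroutine call vs. I/O operation" gap is easy to see if you build the former out of the latter. A minimal sketch in Python: one "remote" call is really a write followed by a blocking read over a socketpair, with the framing and blocking semantics supplied by hand rather than by the OS.

```python
import socket
import struct
import threading

def serve(conn):
    # "Server" side: read a request off the wire, run the subroutine,
    # write the result back. The OS only sees two I/O operations.
    n, = struct.unpack("<i", conn.recv(4))
    conn.sendall(struct.pack("<i", n * n))

def remote_square(conn, n):
    # "Client" side: one subroutine call = one write plus one blocking
    # read. All the call semantics are layered on top by the application.
    conn.sendall(struct.pack("<i", n))
    return struct.unpack("<i", conn.recv(4))[0]

client, server = socket.socketpair()
t = threading.Thread(target=serve, args=(server,))
t.start()
result = remote_square(client, 7)
t.join()
```

A message-passing kernel like QNX gives you the send/receive/reply rendezvous as a primitive; here we have to reinvent it per protocol.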
Whether this has much to do with how your development teams are organized, or whether you go in for the "DevOps" mentality (which is usually an excuse for not having a competent operations staff), isn't clear.
Well, microservices is the new buzzword. Sadly, most people who venture down this path do it using technology they already know: HTTP and REST with JSON. Once you have a lot of services, though, that adds up to a lot of overhead.
But ignoring the protocols for a second, another major problem I see with a lot of microservices architectures is the complete lack of transparency for the operations team and developers once you spin up the system.
What happens if a request gets stuck somewhere in the pipeline? Who will know about it? If it's HTTP REST and the request times out because of poor timeout tuning, you have to cascade that failure up the call chain to the original caller -- something a lot of projects fail to do properly.
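One pattern that addresses this is propagating an absolute deadline down the chain instead of giving each hop its own fresh timeout. A hedged sketch (names like `call_with_deadline` are mine, not any particular framework's):

```python
import time

class DeadlineExceeded(Exception):
    pass

def call_with_deadline(func, deadline, *args):
    # Refuse to start work the caller can no longer wait for, and hand
    # the same *absolute* deadline to the callee so the budget shrinks
    # at every hop instead of resetting.
    remaining = deadline - time.monotonic()
    if remaining <= 0:
        raise DeadlineExceeded("budget exhausted before call")
    return func(deadline, *args)

def service_b(deadline):
    return "b-result"

def service_a(deadline):
    # A forwards the deadline it was given rather than starting a new
    # independent timeout; an expiry here surfaces to the original caller.
    return call_with_deadline(service_b, deadline)

deadline = time.monotonic() + 0.5
result = call_with_deadline(service_a, deadline)
```

Because the exception type is shared, a timeout anywhere in the chain cascades up to the entry point instead of being swallowed by an intermediate hop.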
Security, as you mentioned, is another bugbear. Sure, you log in to the "login service", but do you pass a token around with each request? Again, a lot of teams just superficially add this to the front-facing service(s) only.
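The alternative is for every internal hop to verify and forward the caller's token rather than trusting whoever is upstream. A toy sketch (the service names and in-memory token store are hypothetical stand-ins for a real token validator):

```python
# Stand-in for whatever the login service issued; in reality you'd
# verify a signed token, not look it up in a dict.
VALID_TOKENS = {"tok-abc": "alice"}

class AuthError(Exception):
    pass

def require_auth(token):
    # Every service checks the token itself; none trusts its caller blindly.
    if token not in VALID_TOKENS:
        raise AuthError("unknown token")
    return VALID_TOKENS[token]

def inventory_service(token):
    require_auth(token)
    return ["widget"]

def checkout_service(token):
    user = require_auth(token)
    # Forward the caller's own token on the internal hop, not some
    # shared god-mode credential.
    items = inventory_service(token)
    return {"user": user, "items": items}

order = checkout_service("tok-abc")
```

With this shape, the "A can talk to B, can A induce B to do X?" question gets answered per-request rather than per-network-segment.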
Debugging and logging is really difficult to pull off well. The best I've seen was traceable logging that carried over between microservices by fastidious use of "log namespacing" and timestamping to ensure that server time drift didn't screw up the ordering over time.
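The traceable-logging idea above can be sketched in a few lines: a single trace ID carried across every hop, plus a per-request sequence number so ordering doesn't depend on each machine's wall clock (the names here are illustrative, not from any specific logging library).

```python
import uuid

LOG = []  # stand-in for a central log aggregator

def log(trace_id, seq, namespace, message):
    # Ordering comes from a per-request sequence number, not server
    # timestamps, so clock drift between machines can't scramble it.
    LOG.append((trace_id, seq, namespace, message))
    return seq + 1

def billing(trace_id, seq):
    # Downstream service continues the same trace and sequence.
    return log(trace_id, seq, "billing", "charge card")

def frontend():
    trace_id = str(uuid.uuid4())  # one ID carried across every hop
    seq = log(trace_id, 0, "frontend", "request received")
    seq = billing(trace_id, seq)
    log(trace_id, seq, "frontend", "response sent")
    return trace_id

tid = frontend()
entries = [e for e in LOG if e[0] == tid]
```

Filtering the aggregated log by trace ID then reconstructs one request's path through every service, in order, regardless of which machine wrote each line.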
Bleh. It's just hard to get right, and because it's such an easy thing to get started with, most people don't notice these problems until it's too late and they're too invested.
Does anyone have experience with using the microservices style for development, but deploying them bundled into a larger unit (e.g., a single Docker image or virtual machine)?
Cloud Foundry is implemented as a bunch of microservice components (authentication, public API, notifications, logging, metrics) that are deployed together using BOSH. Deployment becomes the big issue with microservices, and something like CF or BOSH allows you to quickly deploy a bunch of VMs, containers, or apps that represent your whole system.
There are still advantages to separating things out, even if you often run them all together on the same machine or container.
It's true that a datacenter network is slower than RAM, of course, but if you're already dealing with internet latencies, an extra roundtrip within the data center is hard to even measure - see "Latency numbers every programmer should know", http://www.eecs.berkeley.edu/~rcs/research/interactive_laten....
(I really don't like this fact, aesthetically, but it's usually good business to acknowledge it.)
Sure, for most people it isn't an issue. But sometimes it is. E.g. if you're hosting some kind of product that collects or inserts data into customer's websites.
If you're really aggressive about performance you'll have data centers within 100-200ms of all your customers. If you do this, and you buy into microservices, it doesn't take many inter-service calls to exceed the network latency. In the TechEmpower benchmarks, most frameworks struggle to return a static bit of JSON in under 50ms. Now imagine 10 services communicating to fulfill a request...
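The arithmetic is worth spelling out. Using the figures from this thread (a 100ms client round trip, 50ms per internal hop, hops made sequentially), a toy budget calculation:

```python
def request_latency_ms(network_rtt_ms, per_hop_ms, hops):
    # Sequential internal hops stack on top of the client's round trip;
    # parallel fan-out would help, but dependent calls can't be parallelized.
    return network_rtt_ms + per_hop_ms * hops

monolith = request_latency_ms(100, 50, 1)        # one backend hop
microservices = request_latency_ms(100, 50, 10)  # ten chained services
```

With these illustrative numbers the internal hops (500ms) dwarf the geographic latency you paid to optimize (100ms), which is the commenter's point: the speed-of-light budget is easy to blow from inside the data center.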
I don't think bidirectional TLS is enough in many cases. Defense in depth is required. You need to ensure that when services access other services, they aren't granted wide open privileges because you (hopefully, still) own them.
I would add a reasonable authentication and authorization model to this list of prereqs.
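One minimal shape for that authorization layer is a per-caller scope table, so even a TLS-authenticated internal service only gets the operations it needs. A sketch under assumed names (the scope strings and service names are hypothetical):

```python
# Hypothetical scope table: which internal caller may invoke which operation.
SCOPES = {
    "report-service": {"orders:read"},
    "checkout-service": {"orders:read", "orders:write"},
}

class Forbidden(Exception):
    pass

def authorize(caller, operation):
    # Even trusted, mutually-TLS-authenticated callers get only the
    # operations they actually need (least privilege), so compromising
    # one service doesn't grant wide-open access to the rest.
    if operation not in SCOPES.get(caller, set()):
        raise Forbidden(f"{caller} may not {operation}")
    return True

ok = authorize("report-service", "orders:read")
try:
    authorize("report-service", "orders:write")
    blocked = False
except Forbidden:
    blocked = True
```

This is the defense-in-depth layer the parent comment is asking for: TLS answers "who is calling?", the scope check answers "may they do this?".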