
> Ultimately in the end it means more time for everyone to focus on business logic versus thinking about deployments, so I’m a fan of that.

I find this amusing: our company is migrating to Docker right now, and over in operations I'm spending more and more time thinking about deployments than I've had to since I started doing operations work.

I have to think about "what major functionality has changed in Docker in the last three months, and how does that impact any/all of our images?"

I have to think about creating frequently updated base images, and how to deploy these updated base images across our regions.

I have to think about making the image layers as small as possible, to limit the bandwidth impact of updating images.

I have to think about making as few image layers as possible to avoid devicemapper IO overhead.
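Concretely, the layer-count concern comes down to collapsing RUN instructions (a hypothetical sketch; the package is just an example):

```dockerfile
FROM ubuntu:14.04

# Each RUN instruction creates its own image layer. Three separate RUNs
# here would mean three layers (and three devicemapper round trips);
# chaining them into one RUN produces a single layer.
RUN apt-get update && \
    apt-get install -y nginx && \
    rm -rf /var/lib/apt/lists/*
```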

I have to think about externally mapped volumes to keep necessary IO as fast as possible.
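By "externally mapped volumes" I mean bind-mounting a host directory so that write-heavy paths bypass the copy-on-write storage driver entirely; a sketch (the paths and image name are illustrative):

```shell
# Bind-mount a host directory into the container so the database's
# writes go straight to the host filesystem instead of through the
# storage driver's copy-on-write layers.
docker run -d \
  -v /srv/pgdata:/var/lib/postgresql/data \
  postgres
```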

I have to think about asymmetric TCP, docker vs host network interfaces, natting, and service discovery.

I have to think about private registries, token based authentication systems, and redundancy.

My developers have been able to focus on business logic since they hired an ops team. Will the amount of thinking go down over time? Some of it: the bits which can be automated fade into the background (at least until the core Docker functionality changes yet again), but there are bits and pieces that will necessarily change with every deploy.



The difference is the pain you are experiencing with Docker is potentially transient, whereas the problems with configuration management on the wild internet are intractable, and even unquantifiable. With Docker the scope of changes you have to deal with is at least knowable.

I say "potentially" because I think the jury is still out whether Docker can live up to its promise. Clearly it has a real purpose and value add above traditional configuration management, but the churn is an indication of how hard a problem it is. Plus, even in an ideal case, you still have to deal with security updates and other inevitable version updates and things that invalidate your current images.


The big problem is that docker/container image creation is now buried deep inside an app's build system. This means that your build system is now a critical part of your infrastructure.


Isn't that the case with any continuous deployment pipeline?

Best to use the CD system to do the deploy, and avoid "hold my beer, I've got to emergency deploy this via SSH" which is high risk for minimal gain.


No, I don't mean pushing patches to your app; I mean critical system libraries.

Because they are baked into the container, to do critical patches you need to rebake the image.
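Concretely, a patched system library in the base image means a full rebuild-and-push cycle for every downstream image (a sketch; the image names are hypothetical):

```shell
# A base-image security patch ripples through every image built on it.
docker pull ubuntu:14.04            # pick up the patched base layers
docker build -t myorg/app:1.2.4 .   # rebake the app image on top of them
docker push myorg/app:1.2.4         # then redeploy everywhere it runs
```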

The Docker bit will be mostly painless, unless you've done something stupid; however, the bits around the app can and will change.

The number of times I've tried to reuse a build job from 3-6 months ago, only to have it fail horrifically, is far too high.


author here.

I'm assuming you have your organization's docker files in source control and Jenkins; maybe you are building on top of the Ubuntu base image. So it shouldn't be buried - you should be able to update in source control and rebuild.

(Also, this post is mostly applicable to AMIs).

And 100% absolutely, your build system SHOULD be a critical part of your infrastructure. Not mission critical in the sense that it being down is a problem, but it's how you roll out changes.

This also allows you to continuously test the software going into builds, have dependencies, and all of those things.

This is "continuous integration + continuous deployment", but it doesn't have to be continuous. But continuous is a (cough) continuum and there are steps down that road that yield benefits without going all the way.


What do I gain compared to creating a Puppet deployment setup and configuring my machines from there?

I see no benefit from Docker (but then, I don't know it well). Any code that does not come directly from source control is risky, and Docker incentivizes the worst possible build workflow: the developer builds everything on his machine and passes the binaries along for deployment. That's as bad as developing on production.


Well, your build system should all be in version control, so that's a red herring.

The issue with a developer building something and then throwing it over the wall to ops is one possible workflow, but I think it's a stretch to say that's what Docker "encourages". It only encourages that if you have inexperienced people doing your build system. Docker actually dovetails nicely with the DevOps movement to break down those kinds of throw-it-over-the-wall silos, and if you have someone experienced and skilled in charge of your build, this poor workflow won't happen.

So what do you gain? Well, you gain lightning fast, reproducible server deployments. You gain production / dev parity. You gain the ability to develop multi-node distributed systems locally without crippling performance overhead. Obviously this all comes at significant complexity which may exceed your gains (hence why I say the jury is still out), but the problems it solves are very real, and not adequately addressed by pure configuration management or VM technology.


> lightning fast

Depending on your bandwidth to the registry (and its availability).

> reproducible

As long as the tag wasn't overwritten by someone.
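(If your Docker version supports it, pinning the base image by content digest instead of a mutable tag sidesteps this; the digest below is a placeholder, not a real one:)

```dockerfile
# A tag like ubuntu:14.04 can be repointed by whoever pushes last;
# a content digest identifies exactly one image.
FROM ubuntu@sha256:0123456789abcdef...   # placeholder digest
```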

> production / dev parity

Which has been enabled for years via Vagrant, and by plain virtual machines even before that.

> develop multi-node distributed systems locally

Which won't match production, unless you put in a lot of networking effort in both regions: a set of linked docker containers will behave very differently than a set of docker containers on potentially separate hosts.


I'm not a cheerleader; you don't need to cherry-pick words to dismiss them off-handedly. If you don't see the potential advantages that Docker brings to the table in terms of parity, immutability and performance, then you are being intellectually dishonest and there's no point having a discussion.


Yeah, storage seems somewhat rough in particular.

This wasn't meant to be a Docker post. AWS or any cloud that allows messing with load balancers gets you pretty much there too.

However, I like that it provides a localized image builder a lot, and I like docker files themselves a lot, and the idea that it's cross-cloud. I think the other edges can get smoothed out.


Curious, isn't it? It seems that to most application developers Docker is merely a faster, different Vagrant.. that they have to run in a VirtualBox VM because they are on OS X.. The best PaaS experience is pushing code that the PaaS provider then checks out to construct the Docker container for you :|

This isn't to say there are no benefits, and as a "DevOps Engineer", yay, fun stuff people need me to work on (in addition to security, performance, automation, etc.)! But hmm.. there are wins in some areas and added complexity too. Plenty of work to go around.


I don't see this. I think application developers are largely still running out of source control, whether that be in VirtualBox or Fusion or whatever... and Docker files are possibly replacing package build steps.

Maybe it's the case though.

I still think most of the press that Vagrant gets is really "yay, cheap virtualization" attributable to VirtualBox, rather than the workflow - but maybe some people's development environments really are that hard to set up. I like VMware Fusion a bit more.

Still, when a dev env is hard to set up, I like to see automation to do this that doesn't presume I'm running it in a virtualized environment. So this could be the same script that a Docker file calls to deploy in production -- whether that's bash, some config tool, etc -- but at least then you are not assuming someone chooses to adopt Vagrant, or is running virtualized, in all cases.

My point in not adopting Vagrant is decidedly minor - I like VMware Fusion, and I didn't really want to pay for the Vagrant plugin, because my developer machines don't need to be purged that often, developer-env setup is pretty much running a script, and often the application just runs out of source control. So there's not a lot of dev-env machine rebuild churn.


> Docker files are possibly replacing package build steps

Tomayto, tomahto. Both are feeding a configuration file into an external tool and uploading the results.

> I like VMware Fusion a bit more.

So pay for a Vagrant license and get VMware Fusion as a box provider. Vagrant is a lot more than a nice wrapper around virtual machines.

> I like to see automation to do this that doesn't presume I'm running it in a virtualized container

Which is out, unless you develop on a Linux machine.

> I didn't really want to pay for the Vagrant plugin

And here's the meat of the argument. You don't want to pay for a tool, and so you have a fundamental misunderstanding of what workflows are available once you have that tool, so you make do with the development environment you have.

Sorry to hear that.


Or you could adopt a platform that has made a lot of those opinionated decisions in a pre-integrated config.

There's a chance it might not be a fit but then it's a matter of whether it's easier to start somewhere and tweak it vs. starting ground up.



