In a recent post (Docker in Production: A History of Failure) I shared my thoughts about using Docker in production, based mainly on a widely shared blog post by the HFT Guy.
His post stirred up a storm of responses, of which I think the best is the following post.
TL;DR: Docker can be used in production, if you take precautions and keep on top of tooling.
I think this is the main issue dividing the two approaches: would you prefer to be “cutting edge” and have the most recent tooling (with all of its disadvantages) in your production? Or would you prefer to have the most stable production environment around?
CI/CD vs. Traditional Approaches
Unsurprisingly, this debate is reminiscent of the one between the CI/CD community and traditionalists. Should you fail in production (albeit to a small degree) and ship fast? Or keep production stable, but devoid of recent advancements and feature progression?
I believe that Docker (and container technology in general) has unique properties that can make your production environment less brittle, if deployed in tandem with other architectural approaches.
Let’s consider the following example:
- Microservices architecture
- Services communicating either directly through RESTful APIs or indirectly through message brokers (e.g. Kafka)
- Deployment automation (e.g. Jenkins or Travis CI)
- Robust testing through unit tests and end-to-end tests
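As a sketch of how such a setup might be wired together, here is a minimal Compose file; the service names, images, ports, and environment variables are hypothetical illustrations, not taken from the original post:

```yaml
# Hypothetical docker-compose.yml: two microservices communicating
# indirectly through a Kafka broker, each pinned to an image tag.
services:
  orders-api:
    image: example/orders-api:1.2.0   # pinned tag makes rollbacks explicit
    ports:
      - "8080:8080"                   # RESTful API exposed directly
    environment:
      KAFKA_BROKER: kafka:9092
  billing-worker:
    image: example/billing-worker:3.4.1
    environment:
      KAFKA_BROKER: kafka:9092        # consumes events, no public port
  kafka:
    image: bitnami/kafka:3.6
    ports:
      - "9092:9092"
```

A deployment pipeline (Jenkins, Travis CI) would then build and push the tagged images, and the same Compose file runs unchanged on a developer laptop and in production.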
All other things being equal, I believe that Docker would greatly complement the above example:
- Developers and DevOps teams would share the same container (or deployment environment).
- No need to lose time on dependency tracking (dependencies are already encapsulated in the Docker image), etc.
- Rolling back when there is an error is easy (or, even better, use a Blue/Green deployment approach)
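To illustrate the dependency-encapsulation point, a minimal Dockerfile can pin everything a service needs inside the image; the base image, file names, and service are hypothetical examples:

```dockerfile
# Hypothetical Python microservice; base image and file names are illustrative.
FROM python:3.11-slim

WORKDIR /app

# Dependencies are baked into the image, so developers and ops
# run the exact same artifact -- no per-host dependency tracking.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "service.py"]
```

Rolling back then amounts to redeploying the previous image tag (e.g. starting `example/orders-api:1.2.0` again after `1.2.1` misbehaves), rather than untangling package state on the host.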
These advantages are worth considering if your system is designed for them. On the other hand, if your system isn’t designed for this, it might cause issues and strife.
There is no point chasing the newest technology without first evaluating its fit for your use case.