Part of the reason why containers have become so popular is that they enable a "build once, run anywhere" approach to application deployment. With containers, the same application image can run in virtually any environment. Developers don't have to recompile or repackage the app to support multiple environments.
That doesn't mean, however, that the process of deploying a containerized app across different environments is exactly the same. On the contrary, it can be quite different depending on factors like which cloud hosts your app and whether you manage it using Kubernetes or another orchestration solution.
These environment-specific variations in container management are worth spelling out because they're often glossed over in conversations about containers. It's easy to get lost in the allure of the "build once, run anywhere" mantra without fully appreciating how much the experience of container deployment can actually vary between environments.
For that reason, I'd like to walk through the key ways in which container deployment and management can differ depending on which environment and orchestration service you use. None of these differences makes one type of environment "better" for hosting containers than another, but they're important to keep in mind when assessing which skills and tools your team will need to support containerized apps in the environment you choose to deploy them.
Principles That Apply to All Container-Based Deployments
Before discussing environment-specific differences regarding containers, let's talk about the aspects of container deployment that are the same no matter where you choose to run containerized apps.
One constant across environments is security principles. You should always embrace practices like least privilege (which means granting containers access only to the resources they require, and no more) to mitigate risk. You should also enforce encryption over data at rest as well as data in motion.
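As a concrete sketch of least privilege at the container level, the Docker CLI exposes flags for stripping capabilities and privileges from a running container (the image name `myapp:latest` is a placeholder):

```shell
# Run a container with a minimal privilege footprint:
# - drop all Linux capabilities rather than inheriting the default set
# - run as a non-root user and group
# - block privilege escalation via setuid binaries
# - mount the container's root filesystem read-only
docker run \
  --cap-drop=ALL \
  --user 1000:1000 \
  --security-opt no-new-privileges \
  --read-only \
  myapp:latest
```

Orchestrators express the same intent through their own constructs (for example, a Kubernetes `securityContext`), but the underlying principle is identical everywhere.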
Container networking, too, is largely standardized across environments, at least as far as connections between containers go. (As I explain below, container network configurations can differ when it comes to exposing container ports to outside networks, in which case orchestrator-specific networking tooling and integrations come into play.)
You'll also always have to manage additional tools and services. No matter where you deploy containers, you'll need to think about provisioning infrastructure to host them, deploying an orchestration service, balancing network load and so on. The specific tools you use for these tasks can vary across environments, but the tasks themselves are fundamentally the same.
How Containers Vary Across Clouds
Now, let's talk about differences in container management between environments, starting with how the cloud you choose for hosting your containers affects the way you manage them.
Broadly speaking, the major public clouds, Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP), do not differ hugely with regard to container management. However, each cloud does offer different takes on container orchestration services.
For example, AWS offers both a proprietary container orchestrator, called Elastic Container Service (ECS), and a Kubernetes-based orchestrator, called Elastic Kubernetes Service (EKS). For their part, Azure and GCP primarily offer only Kubernetes-based orchestration (although Azure supports limited integrations with certain other orchestrators, such as Swarm, via Azure Container Instances). This means that the service you use to manage your containers may vary depending on which cloud hosts them.
Container security tooling and configurations vary between clouds, too. Each provider's identity and access management (IAM) tooling is different, requiring different policies and role definitions. Likewise, if you configure containers to consume specific cloud resources, such as data inside an Amazon S3 storage bucket or SNS notifications, they will work only with the cloud platform that provides those resources. For both of these reasons, you can't lift-and-shift container security policies from one cloud to another. You need to perform some refactoring to migrate your app between clouds.
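To illustrate why such policies don't transfer, here is a hypothetical AWS IAM policy (the bucket and policy names are invented for this example) granting a container's role read-only access to one S3 bucket. Azure role assignments and GCP IAM bindings express the same intent with entirely different, non-portable constructs:

```shell
# Hypothetical example: create a least-privilege policy scoped to one S3 bucket.
# The ARN format, action names and policy JSON schema are all AWS-specific.
aws iam create-policy \
  --policy-name app-s3-read-only \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-app-bucket",
        "arn:aws:s3:::example-app-bucket/*"
      ]
    }]
  }'
```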
Similarly, if you use your cloud provider's built-in monitoring and alerting services (such as Amazon CloudWatch or Azure Monitor), your monitoring and observability tools and processes will vary between clouds. That's especially true if you embed cloud-specific monitoring agents directly inside containers, in which case you'd have to update the agents to rehost the containers on a different cloud without breaking your monitoring and alerting workflow.
Kubernetes' Impact on Container Management
If you opt to use Kubernetes to manage containers (which you may or may not want to do, depending on the unique needs of your app), your experience will also differ in key ways compared to most other approaches to container orchestration. That's because Kubernetes takes a relatively distinctive approach to configuration management, environment management and more.
For example, because Kubernetes has its own approach to secrets handling, you'll have to manage passwords, encryption keys and other secrets for containers running on Kubernetes differently than you would in other environments.
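To illustrate, secrets in Kubernetes are first-class API objects created and consumed through Kubernetes-specific tooling (the secret name and values below are hypothetical):

```shell
# Create a Secret object from literal key-value pairs
kubectl create secret generic db-credentials \
  --from-literal=username=appuser \
  --from-literal=password='s3cr3t'

# Inspect it; note that values are only base64-encoded by default,
# so enabling encryption at rest on the cluster is a separate step
kubectl get secret db-credentials -o yaml
```

A workload then references the Secret by name in its pod spec, a pattern with no direct equivalent in, say, a plain Docker Compose deployment.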
Network integration, too, looks different for Kubernetes-based deployments. Kubernetes supports multiple Service types (such as ClusterIP, for traffic within the cluster, and NodePort, for exposing containers to outside networks), but they're all based on concepts and tooling that are unique to Kubernetes. You can't take a networking configuration that you created for, say, Docker Swarm and apply it to a Kubernetes cluster.
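As a sketch, exposing an existing Deployment through a NodePort Service takes one Kubernetes-specific command (the deployment name `web` is a placeholder):

```shell
# Create a NodePort Service that exposes the Deployment's port 80
# on an automatically assigned port of every cluster node
kubectl expose deployment web --type=NodePort --port=80

# Show the Service to see which node port Kubernetes assigned
kubectl get service web
```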
As another example, most teams rely on environment management tools that are purpose-built for Kubernetes, such as Helm. Kubernetes also comes with its own management tool, kubectl.
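A typical workflow pairs the two tools; the repository URL, chart and release names below are all hypothetical:

```shell
# Register a chart repository and install an application from it
helm repo add example https://charts.example.com
helm repo update
helm install my-release example/my-app

# Verify the resulting pods with kubectl
# (the label shown follows common Helm chart conventions,
# but individual charts may label resources differently)
kubectl get pods -l app.kubernetes.io/instance=my-release
```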
For all of these reasons, working with Kubernetes requires specialized expertise; so much so that it's common today to see enterprises building platform engineering teams dedicated to Kubernetes. Although the principles behind container management in Kubernetes may be the same as those for other orchestrators, the tools and practices you need to implement them in Kubernetes are quite different.
Conclusion: Build Once, Configure Multiple Times
Given the considerable differences that can affect container management in different types of environments, it's a bit simplistic to think of containers as a solution that frees developers and IT engineers from having to think about host environments.
It's true that you can usually deploy the same container image anywhere. But the security, networking and monitoring configurations and tools you use can end up looking quite different across different clouds and container orchestrators. You can build your app once, but don't assume you'll only have to configure it once if you want to deploy it across multiple environments.