So, I am working on a side project and the way I deploy my (golang) application is basically:
- build binary
- copy binary, config files and static assets to the production server
- do blue-green deployment (with nginx) to get zero-downtime deployment (rough sketch below)
- profit
(This is automated, of course! I use Ansible, I can easily roll back if needed, and I can also deploy the same app to multiple machines.)
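For context, the nginx side of the blue-green switch looks roughly like this (a simplified sketch, not my actual config; ports and names are made up):

    upstream app_active {
        # "blue" instance on :8080; a deploy starts "green" on :8081,
        # points this line at it and reloads nginx, then stops the old instance
        server 127.0.0.1:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app_active;
        }
    }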
On my local machine I use Docker to test the Go code, but I don't really see the benefit of deploying my Go app in a container. My colleague told me "it's easier to deploy Docker containers. You just pull the image and voila!". I don't see how my approach could be "more complicated". Also, isn't my approach better in terms of performance? If my Go app runs "bare metal" instead of in a container, then surely the performance should be better, right?
"Bare metal" means that your application is running on the same OS that is also running the raw hardware (metal), aka no virtualization. Containers are not (generally) the same as virtualization.
The opposite of bare metal is virtualized, where the hardware your program sees is not necessarily the physical hardware of the actual host machine.
A Docker container could arguably be considered to be running on bare metal. A container is really just isolation; the parent OS/kernel is still in command. Here is a graphic that illustrates the differences: https://www.sdxcentral.com/wp-content/uploads/2019/05/Contai...
What you are really asking is whether you need an abstraction layer or orchestration tool to manage this.
The short answer is no, you do not need it at all. If you can DIY this and are happy with it, that is sufficient. For example, a current deployment process for one of my clients (an EC2 environment) involves stopping a custom systemd service, pulling the new binary/deps, and then starting the systemd service. Really simple, with a brief moment of downtime, but within this environment that is not a problem.
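Concretely, that whole flow is just a few commands, something along these lines (an illustrative sketch; the unit name, paths, and the S3 source are placeholders):

    # stop the running service
    sudo systemctl stop myapp.service

    # pull the new binary (here from S3; could just as well be scp/rsync)
    aws s3 cp s3://my-release-bucket/myapp /opt/myapp/myapp
    chmod +x /opt/myapp/myapp

    # start the service again
    sudo systemctl start myapp.service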