date: Tue 16 May 21:53:34 CEST 2023
First let me tell you a story:
“Stay awhile and listen…” ;-)
You know, Red Hat really dropped the ball on all this container stuff. I mean, I can’t blame them for not being able to see the future and inventing an easier-to-use abstraction over the core container primitives already found in the Linux kernel (see namespaces and cgroups) at the time. But what the folks at dotCloud came up with, which eventually became Docker, really was cool! And I wonder if Red Hat, as the self-proclaimed leading Linux OS vendor, should have come up with it instead.
I mean, LXC, which leverages the same underlying tech that Docker does, was released in August 2008. Docker as we know it would come out much later, in March 2013. So, perhaps then, my gripe about containers not gaining traction sooner is unfounded. Red Hat was very much pushing OpenStack back then. A quick Google for mentions of “LXC” from 2008 to 2010 found this article, among many others, that references LXC. But even then OpenStack was seemingly very VM-based, as shown here. Containers only came to OpenStack later, circa October 2018 with OpenStack Magnum, as referenced here via the Wayback Machine provided by Archive.org.
Why all this talk about OpenStack?
It is one of the main precursors to modern container orchestration so I think it’s relevant.
So, in summary, Docker could be considered the second iteration of LXC/LXD, and a combination of timing, marketing, ease of use, etc., caused it to succeed.
Maybe third, actually. I mean, FreeBSD has had Jails for a long, long time, and the Solaris folks had similar tech in Zones.
Okay, tangent over. Now, as you recall, you could define a single text file (in Docker’s own instruction syntax) called a
Dockerfile that looked a bit like this and with it define your container:
FROM node:18
ADD . .
CMD ["node", "app.js"]
Ok, the above Dockerfile is horribad. Do not use it in production. It’s just an example of what one could do.
Ok, so from there you’d build this Dockerfile into an image, which is a set of immutable layers of data packaged up and often named (tagged) something useful; running an image gives you a container.
> docker build . -t my-container:22.04
The above would build your image, which would have all your code and tools etc. set up just right, and could then be shared or run on a server or on a container orchestrator like Kubernetes.
Then imagine you had an application that actually requires a database, a webserver, and other tech, like the LAMP stack. The LAMP stack was big in the early web: Linux + Apache + MySQL + PHP got you basically everything you needed to run a tiny dynamic webpage or start Facebook for example.
The folks at Docker came out with a tool called Docker Compose, which built on the idea of Dockerfiles: in a YAML file you define services instead of individual container definitions.
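A compose file for that kind of setup might look something like this (a minimal sketch from memory; the db image, credentials, and network names are illustrative assumptions, not a real production config):

```yaml
version: "3"

services:
  web:
    image: my-php-apache-container:v1.0
    ports:
      - "8080:80"      # only the web service is exposed to the outside world
    networks:
      - frontend
      - backend

  db:
    image: mysql:8     # illustrative; any MySQL-compatible image works
    environment:
      MYSQL_ROOT_PASSWORD: example  # do not do this in production
    networks:
      - backend        # db is only reachable on the internal network

networks:
  frontend:
  backend:
```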
The above defines (again, I did this from memory; it’s probably not best practice, do not use this in production, it’s not even complete!) two services:
my-php-apache-container:v1.0, the web service, which in this case probably has all the PHP assets baked into the container for simplicity, and db, the MySQL database it talks to.
They each have their own networks, and a common practice was to isolate layers such that the backend services were not exposed to the outside world except through the frontend services that would talk to them. Here that’s defined in the network/networks relationship. The my-php-apache-container:v1.0 service can talk to the db, but the db service cannot talk to the web service; in other words, if the db service somehow got compromised, it’d be very hard to then reach the web service due to this network segmentation.
So, now you’ve got your large monolithic application with all its dependencies working, tests running against real databases rather than mocks, and you can take the definitions (the declarative YAML files) and ship them to production to run on faster/stronger hardware.
Basically this ACTUALLY lets a dev ship their computer to the client: if done right, a container tested and running as expected on the dev machine will behave the same on the target because of the immutable nature of the container image.
I am glossing over a lot of detail and history here just to make a point…
Okay, so you now probably see the point: Docker was awesome.
Yes, yes. I know all of this! I have Docker installed already…
Then why did it take the industry so long to standardize on a container format? And why didn’t Red Hat come to own the space, as they were the preeminent Linux server OS vendor?
I’m not sure. But here we are. We’ve got a standard for containers. Docker as a company never really made sense. What did they offer, really? The userspace code (written in Go) that made the Docker command-line tools so powerful was open source. Putting it behind a paywall or a restrictive license would just mean enterprising developers would copy the interface, much like competitors to S3 copy the S3 API and release implementations that are more or less drop-in replacements.
What Docker did was make the idea of using containers, sharing them, and running them in production well known, and make it all easy to do.
So why Podman? Because it runs basically everything in userspace and makes it easy to do so, whereas running rootless Docker is a bit of a hassle. Also, I think the ecosystem is moving away from Docker, and Podman – developed by Red Hat – has the full backing of Red Hat/IBM, whereas (and this is unconfirmed FUD) I am doubtful about the future of Docker, Inc. Even if the company behind Docker’s development were to cease operations, the community would take it over. In any case, I think Podman is cool, probably a bit more secure, and has a really cool ecosystem behind it.
Remember having to add your user to the docker group so you’d not have to run
sudo docker all the time?
That probably didn’t win you over but just keep reading in case you want to get Podman running on Ubuntu anyway :-)
sudo apt install podman
sudo systemctl enable podman
sudo systemctl start podman
podman run -it hello-world <-- look, no sudo, no adding your user to the docker group … and no faffing about with configuring rootless mode
If you’re running podman as a user other than the one you logged in with, then in this world of systemd everything, you’ll have to do this:
> sudo loginctl enable-linger $UID
From the man page of loginctl:
Enable/disable user lingering for one or more users. If enabled for a specific user, a user manager is spawned for the user at boot and kept around after logouts. This allows users who are not logged in to run long-running services. Takes one or more user names or numeric UIDs as argument. If no argument is specified, enables/disables lingering for the user of the session of the caller.
This is needed to get around issues with running the Podman service, I believe.
HOST DIRECTORY CANNOT BE EMPTY
This one stumped me too. But after some googling I found the answer in a GitHub issue:
For me the environment variable XDG_RUNTIME_DIR was not set. It was empty. So Podman was unable to store the metadata it needs to work properly.
The fix is easy:
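Assuming the standard systemd per-user runtime location (/run/user/&lt;uid&gt; – an assumption; adjust if your distro puts it elsewhere), you just export the variable:

```shell
# Point XDG_RUNTIME_DIR at the systemd per-user runtime directory.
# /run/user/$(id -u) is the conventional path on systemd distros.
export XDG_RUNTIME_DIR=/run/user/$(id -u)
```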
You should probably add that to your
.zshrc file so you’ll have it set every time.
Now, if you really wanna have some fun, you can play around with toolbox, which makes it trivial to run any kind of app from a container – from graphical apps like browsers to command-line tools – all while it maps in your home directory, meaning you have all your configs and tools handy!
I will follow-up on this with an article just on toolbox.
Epilogue: I still use containers today to make development easier. I find them much more ergonomic than spinning up entire virtual machines (looking at you, HashiCorp Vagrant and QEMU/KVM). So if you, like me, have moved to Podman from Docker, or want to but things like Docker Compose are holding you back, you’re in luck: there’s Podman-Compose!
Go tinker, try it out. Email me with your thoughts.
Thank you for reading.
Copyright Alex Narayan - 2023