What do package managers do? Install packages, obviously! But that is not everything. In my opinion, package managers do enough to be characterized as general automation frameworks. For example:

  • manage configurations and configuration files
  • manage custom compilation options and flags
  • provide isolation or containerization
  • make sure a specific file is present in a specific place given specific conditions
  • change installation files or configuration based on architecture or other conditions

Not all package managers do all of the above, but you get the idea.

Nix even manages your entire setup with a single configuration file.
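
For readers who haven't seen it, here is a rough sketch of what that single file looks like on NixOS (the option names are the standard ones; the package list and user are made up):

```nix
# /etc/nixos/configuration.nix -- declarative description of the whole system
{ config, pkgs, ... }:

{
  # system-wide packages
  environment.systemPackages = with pkgs; [ git htop nginx ];

  # services are switched on declaratively rather than configured by hand
  services.openssh.enable = true;

  # even users can be declared here
  users.users.alice = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };
}
```

Running `nixos-rebuild switch` then converges the machine onto whatever this file declares.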

It occurred to me that package management could theoretically be handled by an automation framework.

What do I mean by an automation framework? Ansible, Chef, Puppet, or Sparrow.

Now imagine using one of those package managers as an automation framework. For most of them, it would suck (Nix is a notable exception). So maybe common package managers are just bad automation frameworks?

What if we used an automation framework as a package manager? Currently, it might also suck, but only because it lacks package definitions. Maybe it wouldn't be a bad experiment to have a distribution managed by a modern automation framework like Sparrow.

What are the benefits?

  • usable on other distributions
  • more easily create your own packages on the fly
  • greater customization and configurability
  • use a known, easy-to-read programming language to define packages and other functions, instead of a DSL (see the sketch after this list)
  • your package manager can easily automate just about any task using the same syntax and framework
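
To make those last two points concrete, here is a purely hypothetical sketch in Python of what a package definition in such a framework might look like. Nothing here is the API of Sparrow or any existing tool; `ctx` and its helpers (`fetch`, `run`, `install`, `template`, `facts`) are invented for illustration:

```python
# Hypothetical sketch: a "package" is just a function written against whatever
# helpers the automation framework provides. None of these helpers exist today.

def install_myapp(ctx):
    # fetch and build from source, with whatever flags this host needs
    src = ctx.fetch("https://example.org/myapp-1.2.tar.gz",
                    sha256="<expected digest>")  # placeholder checksum
    ctx.run(["./configure", f"--prefix={ctx.prefix}"], cwd=src)
    ctx.run(["make", f"-j{ctx.cpu_count}"], cwd=src)
    ctx.install(src / "build/myapp", "/usr/local/bin/myapp", mode=0o755)

    # the same framework also covers ordinary automation tasks,
    # e.g. architecture-dependent configuration
    if ctx.facts["arch"] == "aarch64":
        ctx.template("myapp.conf.j2", "/etc/myapp.conf", simd="neon")
```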
[–] callcc@lemmy.world 10 points 9 months ago (2 children)

Professional sysadmin here who has been trying to create Ansible roles and playbooks to re-create all his VMs.

I have spent a lot of time "packaging" custom web applications (and other stuff) for Ubuntu systems and building complex configurations for a system of interacting hosts. Once I had finished writing a role to deploy or update one of those applications, I often found it very hard to use it for maintenance. The biggest problem was that I couldn't remember how to invoke the roles or playbooks to get my desired outcome, or what state my systems were in. Another problem with Ansible for my use case is its slowness: installing a rather complex package might take minutes on one host.

All in all, I found that I had been doing things the wrong way. Of course, it's nice having all the procedures documented somehow, but if you don't remember what state your machines are in and which tags and roles to apply, it won't be of practical help in your day-to-day work. My workload is maintaining a bunch of VMs with mostly different sets of packages and config installed, so Ansible can't play to its strength of executing things on multiple machines in parallel.

I'm now switching over to a model where I only use Ansible to manage installation and the configuration tying machines together, and where I use Debian packaging for, well, packaging. Although it's pretty tough to get into, once you have cleared the first hurdles, things fall into place easily. You can do so many things with Debian packaging, including installing custom systemd service units, depending on other packages, distributing customized config files, and installing custom management scripts. There is even a way to ask questions during installation, interactively or non-interactively (debconf). Since you target your package at a specific OS and version, you can rely on files being in their usual places (FHS), which makes configuration easy. The nice thing about this model is that I can now use the tools I've been using for ages to install, update, uninstall, inspect, and configure things. On top of that, I could now easily distribute our weird-to-install software to third parties instead of relying on a broken and lengthy installation procedure.
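
For anyone curious about the size of that hurdle: a minimal source package is a handful of small files under debian/. As a sketch (package names, maintainer, and dependencies are made up), debian/control might look roughly like:

```
Source: myapp
Section: web
Priority: optional
Maintainer: Ops Team <ops@example.org>
Build-Depends: debhelper-compat (= 13)

Package: myapp
Architecture: all
Depends: ${misc:Depends}, nginx
Description: internal web application
 Custom web application, packaged for our Ubuntu hosts.
```

debian/rules is then usually just `#!/usr/bin/make -f` with a catch-all target calling `dh $@`; a unit file dropped in as debian/myapp.service gets installed and enabled by debhelper, and `dpkg-buildpackage -us -uc` produces the .deb.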

Sometimes we should just stop reinventing the wheel and instead try to understand what previous generations have built (.deb, SQL, Unix, etc.). Sure, the old ways are bad in many ways, but they often get the work done.

That being said, I'm happy for people to work on things like Nix, Guix, Ansible, etc. They are just not the right tools for my set of skills and problems.

[–] MajorHavoc@programming.dev 3 points 9 months ago (1 children)

I'm now switching over to a model where I only use Ansible to manage installation and the configuration tying machines together, and where I use Debian packaging for, well, packaging.

Makes sense. I imagine the push model of Ansible had a lot to do with the speed issues? I can imagine how a solid .deb would be much more performant.

Sure, the old ways are bad in many ways but they often get the work done.

As someone who unapologetically uses Makefiles with even the newest and shiniest tech, I couldn't agree more with this sentiment!

[–] callcc@lemmy.world 2 points 9 months ago (1 children)

Makes sense. I imagine the push model of Ansible had a lot to do with the speed issues? I can imagine how a solid .deb would be much more performant.

It's part of the problem, but the other part is that you have to redo the whole package build all the time. Alternatively, you fiddle with tags and only run part of your roles (which is a hassle anyway, because Ansible does not really have good abstractions that help with encapsulation).
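
For readers who haven't lived this, partial runs end up looking something like the following (playbook, host, and tag names are made up), and remembering the right combination weeks later is the hard part:

```sh
# re-run only the deployment-related parts of a big playbook on one host
ansible-playbook site.yml --limit web01 --tags "webapp,config" --skip-tags "build"
```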

[–] MajorHavoc@programming.dev 1 points 9 months ago

I've also struggled with Ansible tags, and said good riddance, at least for my use cases.

I ended up breaking my playbooks up into my own relatively small roles, and then reusing those instead. It's not perfect, but I've been able to feel progress. I still usually make changes, but they're not as invasive, since I've found it pretty easy to turn a role on or off.
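
A rough sketch of that layout (role and variable names are invented): the playbook just composes small, single-purpose roles, each of which can be switched off per host or group via a variable:

```yaml
# site.yml -- compose small roles; the toggles default to sensible values
- hosts: appservers
  roles:
    - role: base_packages
    - role: webapp_deploy
      when: enable_webapp | default(true)
    - role: backup_agent
      when: enable_backup | default(false)
```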

[–] corsicanguppy@lemmy.ca 0 points 9 months ago (1 children)

ask questions during installation, interactively or non-interactively (debconf).

Debs have always had weak validation, but here you're also weakening consistency, which is the second pillar of enterprise packaging. You're not moving the art forward by kicking its legs out.

[–] callcc@lemmy.world 1 points 9 months ago

Not sure I understand your criticism. Debs definitely help compared to how I was doing things before. Adding some form of parameters (e.g. the hostname used by some web application) to the package is necessary, and I'd rather have that in the form of debconf than have to edit a config file after installation.
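
For context, the debconf machinery behind that is small: the question is declared in debian/templates, asked by the debian/config script, and read back in debian/postinst, and preseeding it with debconf-set-selections (or installing with DEBIAN_FRONTEND=noninteractive) covers unattended installs. A sketch, with mypkg/app-hostname as a made-up question name:

```sh
#!/bin/sh
# debian/config -- ask for the hostname at install or reconfigure time
set -e
. /usr/share/debconf/confmodule
db_input medium mypkg/app-hostname || true   # non-fatal if already answered
db_go

# debian/postinst then reads the answer back with:
#   db_get mypkg/app-hostname
#   # the value is now in "$RET" and can be written into the app's config
```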

Do you have an alternative?