I’m doing a bunch of AI stuff that needs compiling to try various unrelated apps. I’m making a mess of config files and extras. I’ve been using distrobox and conda. How could I do this better? Chroot? Different user logins for extra home directories? Groups? Most of the packages need access to CUDA and localhost. I would like to keep them out of my main home directory.
I did Linux From Scratch recently and they have a brilliant solution. Here’s the full text but it’s a long read so I’ll briefly explain it. https://www.linuxfromscratch.org/hints/downloads/files/more_control_and_pkg_man.txt
Basically you make a new user with the name of the package you want to install. Log in as that user, then compile and install the package.
Now when you search for files owned by the user with the same name as the package you will find every file that package installed.
You can document that somewhere or just use the find command when you are ready to remove all files related to the package.
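A rough sketch of that workflow, with a hypothetical package named “foo” (the user-management steps need root, so they’re shown as comments; the find-by-owner demo at the end uses the current user in a scratch directory):

```shell
# Privileged steps, run as root (commented out here):
#   useradd -m foo                    # dedicated user named after the package
#   su - foo                          # then ./configure && make && make install
#   find / -user foo 2>/dev/null      # lists every file "foo" installed
#   userdel -r foo                    # drop the user once the package is gone
# Unprivileged demo of the same find-by-owner trick:
demo=$(mktemp -d)
touch "$demo/installed-file"
find "$demo" -user "$(id -un)" -type f    # prints only files owned by you
```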
I didn’t actually do this for my own LFS build so I have no further experience on the matter. I think it will eventually lead to dependency hell when two packages want to install the same file.
I guess flatpaks are better about keeping libraries separate but I’m not sure if they leave random files all over your hard drive the way apt remove/apt purge does. (Getting really annoyed about all the crud left in my home dir)
Thanks for the info! I’m definitely gonna look into flatpak.
I built nodejs from source yesterday and it took forever. I’d definitely prefer something huge like that in a flatpak.
That’s clever. It should work on any system, shouldn’t it?
Any POSIX compliant system as far as I know.
Thanks. I’ll keep that in mind for next time.
Thanks for the read. This is what I was thinking about trying but hadn’t quite fleshed out yet. It is right on the edge of where I’m at in my learning curve. Perfect timing, thanks.
Do you have any advice when the packages are mostly python based instead of makefiles?
This method should work with any command that’s installing files on your disk, but it’s probably not worth the headache when virtual environments exist for Python.
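For the Python side, a minimal venv lifecycle looks roughly like this (the path is hypothetical; the pip line is commented out because it needs network access):

```shell
python3 -m venv /tmp/ai-venv            # everything lives under this one directory
. /tmp/ai-venv/bin/activate
# pip install -r requirements.txt       # installs land inside the venv, not in ~
python -c 'import sys; print(sys.prefix)'   # shows the isolated prefix
deactivate
rm -rf /tmp/ai-venv                     # removal is one directory delete, no strays
```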
Python, in these instances, is being used as the installer script. As far as I can tell it involves all of the same packaging and directory issues as what make is doing. Like, most of the packages have a Python startup script that takes a text file and installs everything from it. This usually includes a pip git+address or two. So far, just getting my feet wet to try out AI has been enough for me to overlook what all is happening behind the curtain. The machine is behind an external whitelist firewall all by itself. I am just starting to get to the point where I want to dial everything in so I know exactly what is happening.
I’ve noticed a few oddball times during installations when pip said something like “package unavailable; reverting to base system.” This was while running inside conda, which itself was inside a distrobox container. I’m not sure what “base system” it’s referring to here or whether this is normal. I am probing for any potential gotchas revolving around Python and containers. I imagine it’s still just a matter of reading a lot of code in the installation path.
I hope someone who has more info comes along. It might be time for you to make a new post though since we’re getting to the heart of the problem now.
Also it will be a lot easier for people to diagnose if you are specific about which programs you are failing to install.
I’ve only experimented with Python in docker and it gave me a lot of headaches.
That’s why I prefer to pip install things inside venvs because I can just tar them myself and have decent portability.
But since you’re installing files across the system, I’m not sure what the best solution is.
Nix
NixOS containers could do what OP’s asking for, but it’ll be trickier with just Nix on another distro. It’ll handle build dependencies and such, but you’ll still need to keep your home or other directories clean some other way.
OP could use flakes to create these dev environments and clean them up without a trace once done.
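A throwaway dev shell as a flake might look something like this (untested sketch; the package names are just examples):

```nix
{
  description = "disposable dev shell sketch";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  outputs = { self, nixpkgs }:
    let pkgs = import nixpkgs { system = "x86_64-linux"; };
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.python3 pkgs.nodejs ];   # example build deps
      };
    };
}
```

nix develop drops you into the shell; once the flake directory is deleted, nix-collect-garbage -d reclaims the store paths.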
Any files created by programs running in the dev environments will remain.
nix-collect-garbage does NOT delete files that were written to, for example, ~/.local or ~/.config from a dev shell. One of OP’s problems was, “I’m making a mess of config files and extras.”
I use a mixture of systemd-nspawn and different user logins. This is sufficient for experimentation; for actual use I try to package (makepkg) those tools so they’re organized by my package manager.
Also LVM thinpools with snapshots are a great tool. You can mount a dedicated LV to each single user home to keep everything separated.
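Something like the following (untested sketch; assumes root, an existing volume group named vg0, and a hypothetical user “foo”):

```shell
lvcreate --type thin-pool -L 100G -n homes vg0     # one shared thin pool
lvcreate --thin -V 20G -n home-foo vg0/homes       # thin LV for one user's home
mkfs.ext4 /dev/vg0/home-foo
mount /dev/vg0/home-foo /home/foo
lvcreate -s -n home-foo-snap vg0/home-foo          # cheap snapshot before experimenting
umount /home/foo
lvremove -y vg0/home-foo vg0/home-foo-snap         # throw the whole home away later
```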
I have read up on it some, but Fedora handles UEFI, Secure Boot, and a self-compiling Nvidia driver that gets rebuilt for each kernel update so well that I hesitate to leave. I tried installing the Nix package manager on Fedora, but having a user-owned directory mounted at the root of the filesystem is the ugliest thing I’ve ever seen, so I immediately removed it.
If you’re not comfortable using Nix flakes check out toolbox
I think Podman should do a good job, but I never used it myself. Distrobox is built on it and a lot easier to use, so that’s what I would recommend!
Software like GNU Stow keeps track of the files a package installs and helps you remove them later.
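The usual pattern (untested sketch, with a hypothetical package “foo-1.0”): install each build into its own tree under /usr/local/stow, then let stow symlink it into place:

```shell
./configure --prefix=/usr/local/stow/foo-1.0
make
sudo make install
cd /usr/local/stow
sudo stow foo-1.0       # symlinks foo's files into /usr/local/bin, lib, ...
sudo stow -D foo-1.0    # later: removes exactly those symlinks again
```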
Haven’t tried it (and don’t use docker), so a wild shot: https://github.com/jupyterhub/repo2docker
‘repo2docker fetches a repository (from GitHub, GitLab, Zenodo, Figshare, Dataverse installations, a Git repository or a local directory) and builds a container image in which the code can be executed. The image build process is based on the configuration files found in the repository.’
That way you can perhaps just delete the docker image and everything is gone. Doesn’t seem to depend on jupyter…
Have an LXC config that enables GLX on X11 in the container; spin one up and throw stuff in there, on a temporary ZFS volume.
Lxc-rm when done.
Chroot would be fine for this and not overly complicated
There’s a method using systemd-sysext that would work well for this on any distro without dealing with poking holes in containers. One of the gnome folks blogged about it recently here: https://blogs.gnome.org/alatiera/2023/08/04/developing-gnome-os-systemd-sysext/
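The rough shape of a sysext overlay is something like this (untested sketch; names and paths are examples, and the merge step needs root):

```shell
mkdir -p myext/usr/bin
cp ./mytool myext/usr/bin/                 # files to overlay onto /usr
mkdir -p myext/usr/lib/extension-release.d
echo 'ID=_any' > myext/usr/lib/extension-release.d/extension-release.myext
mkdir -p /var/lib/extensions
mksquashfs myext /var/lib/extensions/myext.raw
systemd-sysext merge      # overlays the extension onto /usr
systemd-sysext unmerge    # removes it again without a trace
```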
export LDFLAGS="-Wl,-rpath=/sw/app/version/lib"
./configure --prefix=/sw/app/version
make
sudo make install
unset LDFLAGS