I'm giving Fedora Silverblue a go on a new laptop, but I'm unable to boot (and since I'm a Linux noob, the first thing I tried was reinstalling it fresh, but that didn't resolve it).

It's a single drive formatted as ext4 and encrypted with LUKS (basically the default config from the Fedora installer).

any ideas for things to try?

  • LalSalaamComrade@lemmy.ml · 15 days ago

    ext4 is just terrible here because of the inode issue: you'll eventually be forced to reformat and reinstall. Anyone using NixOS or Guix, with their heavy store write operations, should not go near it.

      • LalSalaamComrade@lemmy.ml · 15 days ago

        NixOS and ext4 user here with no problems.

        Yet. As most of the articles out there describe, this problem starts showing up somewhere around five months to a year in, depending on how much storage you have and how often you run nixos-rebuild switch / guix system reconfigure. I had around 512 GB, so I ran out of inodes quickly, and despite having plenty of storage space left, the system was unusable for me.

        Here's the exact issue that others have talked about as well:

        TL;DR: your NixOS or Guix system will break due to high inode usage, preventing you from accessing a shell even after clearing older generations. In most cases you can't even clear older generations, simply because you've run out of inodes. More about the filesystem side has been discussed here.
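        If you want to see how close you are, a quick sanity check is to look at inode usage on whatever filesystem holds the store (paths below assume a standard NixOS or Guix layout):

          # percentage of inodes used on the filesystem holding the store
          df -i /nix/store        # or: df -i /gnu/store
          # rough count of store entries; each one costs at least an inode
          ls /nix/store | wc -l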

        • dhhyfddehhfyy4673@fedia.io · 15 days ago

          Seems like this can be prevented from reaching that point by properly deleting old generations on a regular basis, though, right?
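          Something like this is what I have in mind, run every so often (the 30-day / 1-month retention is just an example):

            # NixOS: drop generations older than 30 days, then collect garbage
            sudo nix-collect-garbage --delete-older-than 30d
            # Guix: same idea for the user profile
            guix package --delete-generations=1m
            guix gc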

          • LalSalaamComrade@lemmy.ml · 15 days ago

            No, those inodes still won't clear on their own. Sure, you'll be able to prolong things for a few weeks or months, but then you'll reach a point where you're down to a single generation and there's nothing left you can do to clear space. The device will mislead you by reporting free space, but it isn't accessible, and you can't force space free by manually running disk operations on the store paths either, because a) that's a bad idea and b) you won't have the permissions to. That's what happened to me, and I had to reinstall the entire system again.

            Besides, deleting generations regularly would defeat the point of having a rollback system. Sure, for normal desktop usage you could live with preserving only the last twenty to thirty generations, but this may be detrimental for servers that require the ability to roll back to every generation possible, or for low-end platforms constrained by space and therefore limited to few generations.
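            For that desktop case, keeping only the most recent generations looks roughly like this (a sketch: the +N form needs a reasonably recent nix-env, and the path is the standard NixOS system profile):

              # keep only the latest 30 system generations, then garbage-collect
              sudo nix-env --delete-generations +30 -p /nix/var/nix/profiles/system
              sudo nix-collect-garbage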

            • mvirts@lemmy.world · 15 days ago

              20 or 30 generations 😹

              I have space for 1 😭

              Edit: you've got me worried now. Is the behavior you're referring to normal running-out-of-inodes behavior, or some sort of bug? Is this specific to ext4, or does it also affect Btrfs Nix stores?

              I've run across the information that an ext4 filesystem can be created with extra inodes, but inodes cannot be added to an existing filesystem.
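              From what I've read, those knobs have to be passed at mkfs time, e.g. (device name is a placeholder):

                # one inode per 8 KiB of space instead of the default (usually 16 KiB)
                mkfs.ext4 -i 8192 /dev/sdXN
                # or ask for an explicit number of inodes
                mkfs.ext4 -N 30000000 /dev/sdXN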

              • LalSalaamComrade@lemmy.ml · 14 days ago

                From Re: Guix System ext4 index full:

                Vincent Legoll wrote:

                I think the filesystem (or directory) is full of inodes.

                No, but it’s a similar hard limit, and one that not even ‘df -i’ will warn you about.

                Ext4’s dir_index feature uses hash tables to look up directory entries, so that for directories with a very large number of items (like /gnu/store!), the kernel doesn’t have to do the horribly slow equivalent of:

                for i in *; do …; done

                Unfortunately, once that hash table fills up, the premier stable Linux file system just… gives up and refuses to write any more data. In a very cryptic way.

                The large_dir flag ‘increases the limit’ (the man page does not say by how much), but the limit doesn’t go away.

                Your hash table is full of eels,

                T G-R
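                If you do want to try it, large_dir can, as far as I know, be set at creation time or flipped on an existing (ideally unmounted) ext4 filesystem with a reasonably recent kernel and e2fsprogs; the device name below is a placeholder:

                  # at creation time
                  mkfs.ext4 -O large_dir /dev/sdXN
                  # on an existing filesystem
                  tune2fs -O large_dir /dev/sdXN

                As the mail says, though, that only raises the directory-index limit; it doesn't remove it.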