Hi all, I want to set up a fileserver as a KVM guest which will access a 2TB disk partition to store its data. To do this I see 5 options:

  1. Attach the whole disk to the VM and access the partition as you would on the host machine. -> contraindicated by the RHEL documentation for security reasons.

  2. Attach only the partition to the VM. Inside the VM the partition appears as a drive that needs a new partition table. This seems good to me (for reasons I’ll explain later), but I don’t know how the partition-table-inside-a-partition thing works or what implications it comes with.

  3. Create a sparse max-2TB qcow2 image, store it in the physical partition and attach it to the VM. -> rejected by me because the partition inside the qcow2 image needs constant resizing as your storage needs grow.

  4. Create a fully initialized 2TB qcow2 image. -> current way of doing it: no resizes, no security concerns (I guess). The only drawback I perceive is the time required to initialize a 2TB image (~2.5 hours on an HDD). The commands I have in mind for options 2-4 are sketched after this list.

  5. Export the physical partition to the VM over NFS. I haven’t really investigated this solution (nor am I experienced with NFS), but it seems like it will require some configuration on the host too, which is something I want to avoid because I don’t want to redeploy the host in case shit hits the fan.
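
For reference, here is roughly what options 2-4 look like on the command line. Treat it as a sketch only: the VM name "fileserver", the device /dev/sdb1 and the image path are placeholders for my actual setup.

    # option 2: hand the raw partition to the VM as a block device
    virsh attach-disk fileserver /dev/sdb1 vdb --sourcetype block --persistent

    # option 3: sparse qcow2, 2TB virtual size, tiny on-disk footprint until data is written
    qemu-img create -f qcow2 /srv/images/data.qcow2 2T

    # option 4: fully preallocated qcow2 (this is the step that takes ~2.5 hours on the HDD)
    qemu-img create -f qcow2 -o preallocation=full /srv/images/data.qcow2 2T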

So, why does 2 seem good to me? Neither resizing as in 3 nor long setup times (image initialization) as in 4.

Is there any other solution that I have missed? If not, out of these, which should I choose?

Sorry for the long post, I tried to be as detailed as possible.

  • redcalcium@lemmy.institute · 1 year ago

    Create a sparse max-2TB qcow2 image, store it in the physical partition and attach it to the VM. -> rejected by me because the partition inside the qcow2 image needs constant resizing as your storage needs grow.

    I don’t see how this is an issue? If you set the partition to e.g. 1TB, the qcow2 image will automatically resize itself as the drive is filling up, right?

    • Azadi@lemmy.ml (OP) · 1 year ago

      Yes, the VM will resize it. The problem is the partition inside the image. When I tried this method the image’s actual size was ~200KB, so when I went to create a partition table inside it I was only able to create a 200KB partition. I think that when this partition fills up, the VM will reserve more space inside the image, but that space will appear as unallocated in the guest; it won’t grow the partition automatically. I might have overlooked something though, so I will try this method again.
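
      When I retry it I’ll double-check the virtual vs. actual size on the host with something like this (the image path is just an example):

          qemu-img create -f qcow2 /srv/images/data.qcow2 2T   # sparse image, starts out tiny on disk
          qemu-img info /srv/images/data.qcow2                 # "virtual size" should still report 2T even while "disk size" is only ~200KB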

  • lemmyng@beehaw.org · 1 year ago

    Downside to 2: your VM becomes harder to move between hardware, and you lose the snapshotting capabilities of a copy-on-write image.

    5 is flexible, but has limitations. For example you wouldn’t want to run databases on NFS volumes.

    If initialization time is the only problem with 4, you could create several smaller images on the disk. Create the first one, initialize the VM and set up an LVM volume on it, then create more images as needed and extend the LVM volume onto them.
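
    Roughly like this inside the VM, assuming the images show up as /dev/vdb, /dev/vdc and so on (device and volume group names are just examples):

        # first image
        pvcreate /dev/vdb
        vgcreate data /dev/vdb
        lvcreate -l 100%FREE -n storage data
        mkfs.ext4 /dev/data/storage

        # each additional image, as the existing space fills up
        pvcreate /dev/vdc
        vgextend data /dev/vdc
        lvextend -l +100%FREE /dev/data/storage
        resize2fs /dev/data/storage    # grow the filesystem online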

    • Azadi@lemmy.ml (OP) · 1 year ago

      There is no need to move the VM nor to create many VM/qcow2 storage images, so I guess I can stick with 4.

      What I need is to be able to easily redeploy the VM if needed (its / is on a different drive than the storage one) and to access the files on the storage drive without hassle. So if I stay with 4 I can redeploy the VM, attach the qcow2 image and restore the fileserver to its pre-redeployment state.
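
      In other words, the redeploy procedure should boil down to reattaching the existing image to the fresh VM, something like this (the VM name and image path are placeholders):

          virsh attach-disk fileserver /srv/images/data.qcow2 vdb --driver qemu --subdriver qcow2 --persistent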

  • SheeEttin@lemmy.world · 1 year ago

    Create a 2TB virtual disk, whatever that means on your platform, and attach it to the VM. Growing a qcow2 image is trivial (qemu-img resize disk.qcow2 +10G). Yes, you will also have to grow the partition inside the VM, but that’s always going to be true and should also be trivial.
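
    The whole flow is only a handful of commands; device names are examples and growpart comes from the cloud-utils package:

        # on the host (or use virsh blockresize while the VM is running)
        qemu-img resize disk.qcow2 +10G

        # inside the VM
        growpart /dev/vdb 1    # grow partition 1 into the new space
        resize2fs /dev/vdb1    # grow the filesystem (xfs_growfs for XFS)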

    • Azadi@lemmy.ml (OP) · 1 year ago

      The problem isn’t the steps involved but whether the resizing is manual or automatic. E.g. the partition inside the VM is full and a user tries to send a big file: is it easy to automate resizing the partition so that the file fits? If it requires manual intervention, I can’t use this solution.

      I think the qcow2 image doesn’t have to be resized manually, only the partition inside it. When you create the image, the size you specify is the maximum size it is allowed to reach. When I first tried this I created a max-2TB image and the actual size of the image on disk was 200KB.

      • SheeEttin@lemmy.world · 1 year ago

        I’m not aware of any on-prem solution that will automatically resize if it needs more space. You could set it up to expand if it hits some low disk space threshold. But if your use case is users randomly sending giant files, consider cloud storage.

        Actually, you might be able to do some kind of object storage on-prem, Ceph or something. Personally I would get some enterprise storage, like a full SAN.

        • Azadi@lemmy.ml (OP) · 1 year ago

          You could set it up to expand if it hits some low disk space threshold

          That’s a good idea. I can be proactive about some things; e.g. it won’t suddenly get more than 30GB of data at once, so I could resize it once the free space drops to ~50GB. I’ll look into it, thanks!
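
          Probably something like this running from cron inside the guest would do it. Completely untested sketch; the device, mount point and threshold are placeholders:

              #!/bin/bash
              # Grow the data partition when free space on /srv/data drops below ~50GB.
              THRESHOLD_KB=$((50 * 1024 * 1024))
              FREE_KB=$(df -Pk /srv/data | awk 'NR==2 {print $4}')

              if [ "$FREE_KB" -lt "$THRESHOLD_KB" ]; then
                  # Grow the partition into unallocated space on the virtual disk, then the
                  # filesystem. If the partition already fills the disk, growpart changes
                  # nothing and the image has to be grown on the host first (qemu-img resize).
                  growpart /dev/vdb 1 && resize2fs /dev/vdb1
              fi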