SOLVED Root Exhaustion

Questions about virtualization software
SayWhat
Level 2
Posts: 89
Joined: Sat Oct 14, 2023 6:27 pm

SOLVED Root Exhaustion

Post by SayWhat »

Hi, original poster here.
My problem proved unrelated to virtual machines.
Reading this thread is likely wasting your time.

I'm marking it "SOLVED" so as to avoid attracting the attention and wasting the time of people looking to help. TY.

=================

So I've got VMs and GPU passthrough running under Mint, and overall things run beautifully, except... a couple of times I've had hard crashes, with the machine displaying the message "0 bytes free on /root". I installed on a 1 TB NVMe, peeled out 100 GB for a dual-boot native Win 10, and have 3 VMs set up in the remaining 830-some GB. That doesn't seem like it should be tight, but it seems I'm out of space.

Aside from the general question of how one does space management in Linux (I know how to clean up a Windows C: drive, but not Linux), I'm also wondering how much of this is a Linux problem and how much is VM related. For example, do the configurations in the XML change disk usage in some way that would cause or mitigate this problem? In a regular installation there are tools to adjust partition sizes if necessary, but I'm not aware of any such tools for a VM; do they exist? Does allocating more RAM to a VM increase storage demands in /root? And, the performance hit aside, can I move one VM's image to a different drive to relieve pressure on that M.2 stick?

Help, please.
Last edited by SayWhat on Tue Jan 09, 2024 4:11 am, edited 1 time in total.
SayWhat
Level 2
Posts: 89
Joined: Sat Oct 14, 2023 6:27 pm

Re: Root Exhaustion

Post by SayWhat »

PS: I always do a full shutdown of the client OSes and never use pause, so it shouldn't be saving an image of RAM contents. Could it be doing that anyway, maybe?
SayWhat
Level 2
Posts: 89
Joined: Sat Oct 14, 2023 6:27 pm

Re: Root Exhaustion

Post by SayWhat »

PPS: I found the qcow2 files, and they total 370 GB, so I'm still confused about how I can be bumping my head on the ceiling with 830 GB available... and what can I do about it?
jackkileen
Level 4
Posts: 372
Joined: Mon Feb 04, 2019 7:58 pm
Location: Rocky Mtn High; FL Gulf

Re: Root Exhaustion

Post by jackkileen »

What partition/drive do you have Root on?

This is mine ...


Mint Drive.jpg
MINT: 21.3 Cinnamon 6.0.4_Kernel:6.5.0-15-generic - AMD Ryzen 9 5950X 16-Core Processor × 16
MX LINUX: KDE Plasma Version: 5.27.5_Kernel Version 6.1.0-17-amd64 (64-bit): X11
SayWhat
Level 2
Posts: 89
Joined: Sat Oct 14, 2023 6:27 pm

Re: Root Exhaustion

Post by SayWhat »

Now, that's what I'd expect to see, more or less. Instead, when I look at the device with GParted I see the dual-boot Win partition and supposedly unallocated space. I'd assume it installed Mint somewhere else, but it's the only 1 TB device in the machine, so I don't think I committed that blunder. Nonetheless, the space where Mint must be living shows as empty. So... I'm confused.

Pardon the quality of the screenie; I just now (kind of) figured out how to take one, and for some reason the selection option was greyed out.

<edit> here's a better one. (I'm learning this stuff on the fly.)
Attachments
Screenshot from 2024-01-04 13-05-04.png
powerhouse
Level 6
Posts: 1144
Joined: Thu May 03, 2012 3:54 am
Location: Israel

Re: Root Exhaustion

Post by powerhouse »

I also can't really understand what's going on. Could you post the output of the following, on the Linux host of course?

Code: Select all

lsblk

Do you have a second disk in your computer?

Where does the message appear? On the Linux host?

On Linux you may have to do some cleaning of downloaded cache files. First check the disk space:

Code: Select all

df -h
To clean the cache, use:

Code: Select all

sudo apt autoclean
I get a full root directory on my Linux Mint server every once in a while. You may also want to run

Code: Select all

sudo apt autoremove
to weed out orphaned packages.

If it has to do with the Windows guest:
If you configured your VM using Virtual Machine Manager, can you post the xml configuration? While on "Overview", click the XML tab and select the entire configuration using Ctrl-A. Post it here (you may want to delete the network MAC address and any other identifying name or number).
Subjects of interest: Linux, vfio passthrough virtualization, photography
See my blog on virtualization, including tutorials: https://www.heiko-sieger.info/category/ ... alization/
jackkileen
Level 4
Posts: 372
Joined: Mon Feb 04, 2019 7:58 pm
Location: Rocky Mtn High; FL Gulf

Re: Root Exhaustion

Post by jackkileen »

Yeah, if that's your only drive then it looks like Mint is on the 97GB partition and it's nearly filled to capacity.
I'm not positive, since I don't dual boot, but you should be able to expand/enlarge that partition into the unallocated space.
MINT: 21.3 Cinnamon 6.0.4_Kernel:6.5.0-15-generic - AMD Ryzen 9 5950X 16-Core Processor × 16
MX LINUX: KDE Plasma Version: 5.27.5_Kernel Version 6.1.0-17-amd64 (64-bit): X11
SayWhat
Level 2
Posts: 89
Joined: Sat Oct 14, 2023 6:27 pm

Re: Root Exhaustion

Post by SayWhat »

Heiko, always glad to see you here! The dumps you requested are attached.

I've already run the apt cleanup commands; they cleared up a few megs only, so no joy there.

I also ran BleachBit (I was surprised to discover it's not just a shredder but a general cleanup tool); it cleared 1 GB of space, but that wasn't enough: on the next run I got the error message again.

The message is from the host, and once it happens, virt-manager can't even start. I've gotten things going this time the same way I did the first time this happened: by destroying one of my VMs and making a new one with 20 GB less "disk" space allocated, but I suspect that's only a temporary fix. I did that the first time, things ran for a few days, and then it recurred.

I'm not sure about posting the XML of one or all of the VMs; it's the host that's falling on its face. I'm willing enough, but I don't see the connection.

JACK: Oh yes, I have bushels of drives, but I'm as sure as I can be (given that something confusing is clearly going on) that I installed on the one drive that's 1 TB. That "100" GB partition you see is a Windows I can dual-boot to, so I can either boot to that normally installed Windows, or boot to Linux with Windows VMs as guests. As an illustration, I can run all 3 VMs at once using only about 50% of the CPU and 75% of the (128 GB) RAM, and the qcow2 files add up to 370 GB all by themselves, so they couldn't even be stored in that 100 GB partition. Unless they're hiding somewhere else (I can't imagine where), they're in that 830 GB that GParted calls "unallocated space".
Attachments
df_h_output.txt
lsblk_output.txt
jackkileen
Level 4
Posts: 372
Joined: Mon Feb 04, 2019 7:58 pm
Location: Rocky Mtn High; FL Gulf

Re: Root Exhaustion

Post by jackkileen »

SayWhat wrote: Unless they're hiding somewhere else (I can't imagine where), they're in that 830 GB that GParted calls "unallocated space".
Doubtful :) If it's unallocated there aren't any files on it. https://linuxopsys.com/topics/check-una ... pace-linux
Everything is on the active 90+GB partition unless you have another drive.
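If you want to double-check from the command line, something like this should list the partitions and any free (unallocated) space on that disk. The device name is only an example; confirm yours with lsblk first.

Code: Select all

# device name is an example -- confirm with lsblk before running
sudo parted /dev/nvme0n1 unit GB print free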
MINT: 21.3 Cinnamon 6.0.4_Kernel:6.5.0-15-generic - AMD Ryzen 9 5950X 16-Core Processor × 16
MX LINUX: KDE Plasma Version: 5.27.5_Kernel Version 6.1.0-17-amd64 (64-bit): X11
powerhouse
Level 6
Posts: 1144
Joined: Thu May 03, 2012 3:54 am
Location: Israel

Re: Root Exhaustion

Post by powerhouse »

Here's the relevant part from lsblk:

nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:1 0 16M 0 part
└─nvme0n1p2 259:2 0 97.7G 0 part
nvme1n1 259:3 0 1.8T 0 disk
├─nvme1n1p1 259:4 0 371.9G 0 part /media/misa/QUICKSTORE
├─nvme1n1p2 259:5 0 513M 0 part /boot/efi
└─nvme1n1p3 259:6 0 1.5T 0 part /

Your Linux host is on the second NVMe drive (nvme1n1), a 2 TB drive. Windows is on the first (nvme0n1), the 1 TB drive you mentioned.
Your / (root) partition is 1.5 TB, quite a lot, but it's at 100% use, with only 5.8 GB currently free:
Filesystem Size Used Avail Use% Mounted on
tmpfs 13G 2.3M 13G 1% /run
/dev/nvme1n1p3 1.5T 1.4T 5.8G 100% /

This is very strange!
Run the following command and post the output here:

Code: Select all

sudo du -ah --max-depth=1 / | sort -hr | head -n 20
That should give you an idea as to what takes up so much disk space.
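One caveat: with your other drives mounted under /media, that command will crawl into them as well. A variation that stays on the root filesystem only (the -x/--one-file-system option is standard GNU du) would be:

Code: Select all

# -x keeps du from descending into other mounted filesystems
sudo du -ahx --max-depth=1 / | sort -hr | head -n 20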

You're also missing a swap partition. Here is an example of my Linux partitions:
nvme0n1 259:0 0 465.8G 0 disk
├─nvme0n1p1 259:2 0 512M 0 part /boot/efi
└─nvme0n1p2 259:3 0 465.3G 0 part
├─host-swap 254:0 0 16G 0 lvm [SWAP]
├─host-root 254:3 0 30G 0 lvm /
├─host-home 254:4 0 150G 0 lvm /home
└─host-data 254:6 0 160G 0 lvm /media/heiko/data

As you may notice, I use LVM. But that's irrelevant.
Subjects of interest: Linux, vfio passthrough virtualization, photography
See my blog on virtualization, including tutorials: https://www.heiko-sieger.info/category/ ... alization/
powerhouse
Level 6
Posts: 1144
Joined: Thu May 03, 2012 3:54 am
Location: Israel

Re: Root Exhaustion

Post by powerhouse »

What do you use for storage when creating VMs? Do you create qcow files? That could explain your storage shortage when you create many or large VMs. Run the command I posted before and you should see what's taking up the space.
Subjects of interest: Linux, vfio passthrough virtualization, photography
See my blog on virtualization, including tutorials: https://www.heiko-sieger.info/category/ ... alization/
SayWhat
Level 2
Posts: 89
Joined: Sat Oct 14, 2023 6:27 pm

Re: Root Exhaustion

Post by SayWhat »

Since I barely know what I'm doing with Linux, I tend to stick with the defaults. virt-manager creates qcow2s, so I've simply gone with that.

The command you gave me is running (and has been for a while). With around 60 TB of drives stuck onto this machine, this may take a while. I'll report back when it completes.

I do NOT understand how, when my clear intention was to put everything bootable on the 1 TB drive, something else happened. But if it's actually over on the 2 TB drive, that would explain things. That drive was originally a straight mirror of the (Win 7) image I ran on the 1 TB until just recently. With the new installation, its new task, while not erasing the previously saved data, was to hold recovery data, a few ISOs, and the like. I pointed Timeshift at it also, which probably exacerbates matters. If Mint is in there crowding things, that explains a lot. (I will, however, mumble that there are billions of people comfortable with Windows; if Linux wants to rival that, it shouldn't be so easy to make this sort of mistake and so hard to see it...)

There is a silver lining to this, however: I thought the 100 GB I allocated for dual-boot Win 10 was generous, but its one game has eaten up 66 GB, making that installation barely functional. I hadn't expanded its footprint because I was thinking Mint must be in the space GParted reports as unallocated... maybe it really is. In which case, it's safe for me to expand that partition a little before putting Mint where I thought I had put it: in the roughly 800 GB on that 1 TB drive.

Anyway, I'm guessing a few hours before I can show output from that command. I'll do that when it completes.
SayWhat
Level 2
Posts: 89
Joined: Sat Oct 14, 2023 6:27 pm

Re: Root Exhaustion

Post by SayWhat »

Oh, it completed faster than I expected; please see attached:
Attachments
Screenshot from 2024-01-04 22-30-01.png
SayWhat
Level 2
Posts: 89
Joined: Sat Oct 14, 2023 6:27 pm

Re: Root Exhaustion

Post by SayWhat »

OK, so since I now believe the "unallocated space" really is unallocated, I've expanded the dual-boot Windows a bit (it's now comfortable to run), and I have about 780 GB to house Mint. That should be fine. So my task now is to rearrange things so they're the way I THOUGHT they were before. A couple of questions:

As I type, I'm copying off my qcow2 files to a safe haven, and I plan to copy/save their XML files as well. Do I need to do or save anything else to move those VMs to a new home?

Can Mint be moved from one location to another without too much trauma? If some boot-time loader file needs to be edited to launch it from the new location, do you recall where that's located? If it's not brain surgery, I'd really like to move Mint rather than reinstall it fresh and tinker with everything again.

Ty!
powerhouse
Level 6
Posts: 1144
Joined: Thu May 03, 2012 3:54 am
Location: Israel

Re: Root Exhaustion

Post by powerhouse »

The problem is Timeshift: it tries to back up your root folder, with all the VM files and everything, and takes up 1.1 TB. The first thing to do is disable Timeshift. Afterwards, reconfigure Timeshift to use a different drive (an HDD) with enough storage to back up your entire system drive, perhaps multiple times.
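To see how much space Timeshift is actually holding, you can list its snapshots. If it's running in the default rsync mode with the system drive as the backup device, the snapshots usually live under /timeshift (that path is my assumption - adjust if yours differs):

Code: Select all

sudo timeshift --list      # list existing snapshots
sudo du -sh /timeshift     # rough total size, if the snapshots sit on the root filesystem
Deleting the old snapshots (through the Timeshift GUI, or sudo timeshift --delete-all if you're sure you don't need any of them) should free that space right away.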

With regards to copying/moving your entire Linux Mint installation to another drive, have a look at this thread on the Ubuntu forum.

With regards to moving the /boot/efi partition, see this thread on the Linux Mint forum.

Since copying a Linux installation has nothing to do with virtualization, you should post your question under the "Installation & Boot" subject. There you will draw the attention of more qualified people than me.

Here are my suggestions, which may or may not work - so please be careful and first read the other posts I linked above. The suggestions in the links above are sometimes a lot simpler. The reason I still post the stuff below is that it may give you some understanding and tools to solve the problem:

You have to be careful with the following /boot/efi partition:
├─nvme1n1p2 259:5 0 513M 0 part /boot/efi

Your boot partition is on the second NVMe drive, on the second partition (nvme1n1p2). I'm not sure if you can just copy it somewhere else. Perhaps someone more familiar with that can reply. I suggest you post a separate question in the installation section of the forum.

Before doing anything with the second NVMe drive, copy/back up the /boot/efi folder to another disk! That /boot/efi partition holds everything needed to boot both Windows and Linux. (Re-)installing Linux is easy, but reinstalling Windows kinda sucks.
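For the backup itself, something as simple as this would do (the destination is only an illustration - any other disk with a little free space works):

Code: Select all

# keep a copy of the current EFI files before touching any partitions
sudo cp -a /boot/efi /media/misa/QUICKSTORE/efi-backup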

If you are sure that your Windows partition is big enough for your requirements (current and future), you can create a small FAT32 primary partition (512 MB) right after the Windows partition and flag it as an "EFI system" partition (this will be the new /boot/efi partition). Make sure to format it as FAT32. You can use the "disks" application in Linux Mint to do that, or the command-line tool "fdisk".
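If you go the command-line route, the formatting step might look like this; the partition number is a guess on my part, so double-check with lsblk or sudo fdisk -l before formatting anything:

Code: Select all

# WARNING: formatting destroys whatever is on the partition -- be sure this is the new, empty one
sudo mkfs.vfat -F 32 /dev/nvme0n1p3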

Following the new /boot/efi partition, create a swap partition; any size between 1 and 4 GB should do. Make sure to flag it as "swap" or "Linux swap partition".
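Again assuming the new swap partition turns out to be /dev/nvme0n1p4 (verify first), initializing and enabling it would be roughly:

Code: Select all

sudo mkswap /dev/nvme0n1p4     # write the swap signature
sudo swapon /dev/nvme0n1p4     # enable it now; the fstab entry further down makes it permanent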

Temporarily mount the new 512 MB EFI partition (you can use "disks") and copy all the files from your current /boot/efi partition to the new one. Unmount both partitions. Then get the UUIDs of the newly created partitions, again using disks or the following command line:

Code: Select all

sudo blkid
/dev/mapper/host-root: UUID="very-long-UUID" BLOCK_SIZE="4096" TYPE="ext4"
/dev/nvme0n1p1: UUID="short-UUID" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI" PARTUUID="long-UUID"
/dev/mapper/host-swap: UUID="very-long-UUID" TYPE="swap"

Above is a shortened example from my computer - yours will be different (you won't have /dev/mapper entries).

Go to /etc and back up the fstab file. You can use the terminal:

Code: Select all

cd /etc
sudo cp fstab fstab.back
Open your file explorer as root (you need root privileges for the following steps). Now edit the fstab file and change the UUID of the /boot/efi entry to match the one from the new partition on nvme0n1p3 (it should be p3, but check). Mine looks like this:
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=123A-4BCD /boot/efi vfat noatime,umask=0077 0 2
UUID=4BCD-123A none swap defaults 0 0

Do the same for the swap partition. Save and close.

Check that everything looks good on drive nvme0n1. Here is an example of my NVMe boot disk:

Code: Select all

sudo fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: Samsung SSD 970 EVO Plus 500GB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 1234ABCD-DCBA1234-xxxxxxxxxxx

Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 1050623 1048576 512M EFI System
/dev/nvme0n1p2 1050624 976773119 975722496 465.3G Linux LVM

Note it is using "GPT" and has the small 512 MB EFI (or UEFI) boot partition. In your case you should also see the swap partition, plus the Windows partitions.

Using the disks application, unmount the old /boot/efi partition nvme1n1p2 and then select "Edit Partition" using the gears button. In there you can select "Hide from firmware" - check that option. This should deactivate that partition and prevent the EFI BIOS firmware from reading it.

I'm not sure, but you may have to run

Code: Select all

sudo update-grub
Make sure you have a bootable Linux Mint stick handy before you reboot your PC. If the PC doesn't boot, use the USB stick and reverse the changes by mounting the file systems, deleting the fstab file, and renaming fstab.back to fstab. Last but not least, again using the disks application, remove the "hide from firmware" flag.
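From the live session, the fstab rollback would look roughly like this (the device name is taken from your lsblk output; adjust it if it shows up differently on the live stick):

Code: Select all

# mount the installed root partition and restore the backed-up fstab
sudo mount /dev/nvme1n1p3 /mnt
sudo mv /mnt/etc/fstab.back /mnt/etc/fstab
sudo umount /mnt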

My recommendation: back up your VMs and reinstall Linux Mint. Make sure that you install to the first NVMe drive. I also suggest backing up all the Windows partitions to another drive.
Subjects of interest: Linux, vfio passthrough virtualization, photography
See my blog on virtualization, including tutorials: https://www.heiko-sieger.info/category/ ... alization/
SayWhat
Level 2
Posts: 89
Joined: Sat Oct 14, 2023 6:27 pm

Re: Root Exhaustion

Post by SayWhat »

Ok.

Suppose I do all this successfully, and eventually have similar overcrowding issues on the boot drive; can I park a qcow2 on a different device and virt-manager will still list it and launch it?
powerhouse
Level 6
Posts: 1144
Joined: Thu May 03, 2012 3:54 am
Location: Israel

Re: Root Exhaustion

Post by powerhouse »

SayWhat wrote: Fri Jan 05, 2024 5:49 am Ok, so since I now believe the "unallocated space" really is unallocated, I've expanded the dual boot win a bit, and that's now comfortable to run, and I have about 780 GB to house mint. Should be fine. So my task now is to rearrange things so they're the way I THOUGHT they were before. A couple of questions:

As I type, I'm copying off my qcow2 files to safe haven, and I plan to copy/save their XML files as well. Do I need to do or save anything else to move those VMs to a new home?

Can mint be moved from one location to another without too much trauma? If some boot time loader file needs to be edited to launch it in the new location, do you recall where that's located? If it's not brain surgery, I'd really like to move mint rather than reinstall it fresh and tinker things again.

Ty!
About the virtual machines: I suppose you used Virtual Machine Manager to create them. To save the XML configuration files, use:

Code: Select all

sudo virsh dumpxml <domain> > domain.xml
For example:

Code: Select all

sudo virsh dumpxml win10 > win10.xml
Do that for each and every VM and copy the resulting XML files to a safe place. You can list your VMs (running and shut off) like this:

Code: Select all

sudo virsh list --all
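If you'd rather not type them one by one, a small loop along these lines should dump them all in one go (--name prints bare domain names; this is only a sketch, so check the resulting files):

Code: Select all

# dump every defined VM's XML into the current directory
for vm in $(sudo virsh list --all --name); do
    sudo virsh dumpxml "$vm" > "$vm.xml"
done
When restoring after a reinstall, sudo virsh define win10.xml (for example) re-registers a VM from its saved file.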
If you have to reinstall Linux, you can also open the XML file and copy/paste its content into Virtual Machine Manager.

The xml config files are typically stored in:
/etc/libvirt/qemu/
Subjects of interest: Linux, vfio passthrough virtualization, photography
See my blog on virtualization, including tutorials: https://www.heiko-sieger.info/category/ ... alization/
powerhouse
Level 6
Posts: 1144
Joined: Thu May 03, 2012 3:54 am
Location: Israel

Re: Root Exhaustion

Post by powerhouse »

SayWhat wrote: Fri Jan 05, 2024 9:59 am Ok.

Suppose I do all this successfully, and eventually have similar overcrowding issues on the boot drive; can I park a qcow2 on a different device and virt-manager will still list it and launch it?
Yes, but you have to change the config in Virtual Machine Manager. You can do that by selecting the VM, clicking "Open", then selecting the "i" icon. Go to the drive configuration, for example "VirtIO Disk 1", then select the XML tab. (If you haven't got the XML tab, go to Edit/Preferences and enable it.)

Then change the path to the drive/file.

Here is what one of my drives looks like:
<disk type="block" device="disk">
<driver name="qemu" type="raw" cache="none" io="native" discard="unmap" detect_zeroes="unmap"/>
<source dev="/dev/vmvg/ubuntu"/>
<target dev="vda" bus="virtio"/>
<boot order="1"/>
<address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
</disk>

This is a real "raw" partition. Your qcow files will look different. But notice the <source dev="/dev/vmvg/ubuntu"/>.
Subjects of interest: Linux, vfio passthrough virtualization, photography
See my blog on virtualization, including tutorials: https://www.heiko-sieger.info/category/ ... alization/
Coggy
Level 5
Posts: 642
Joined: Thu Mar 31, 2022 10:34 am

Re: Root Exhaustion

Post by Coggy »

I think that by default, the VM images are all created in /var/lib/libvirt.
You could perhaps mount your new partition somewhere like /mnt/extra1, move /var/lib/libvirt to /mnt/extra1/libvirt, and then symlink /var/lib/libvirt to /mnt/extra1/libvirt. That should let you stash other bits and pieces on /mnt/extra1 as well, if you wanted to.
If you do that, add /mnt/extra1 to /etc/fstab (the configuration file for which drives get mounted and where) so it gets mounted at boot.
extra1 is an arbitrary name - call it dave if you like.
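A rough sketch of that sequence, assuming the new partition shows up as /dev/nvme0n1p5 and the mount point is /mnt/extra1 (both placeholders), with libvirt stopped so nothing holds the images open:

Code: Select all

sudo mkdir -p /mnt/extra1
sudo mount /dev/nvme0n1p5 /mnt/extra1            # placeholder device -- check lsblk first
sudo systemctl stop libvirtd
sudo mv /var/lib/libvirt /mnt/extra1/libvirt     # move the whole libvirt storage tree
sudo ln -s /mnt/extra1/libvirt /var/lib/libvirt  # leave a symlink at the old path
sudo systemctl start libvirtd
The matching /etc/fstab line would be something along the lines of UUID=<uuid-from-blkid> /mnt/extra1 ext4 defaults 0 2.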
JerryF
Level 16
Posts: 6572
Joined: Mon Jun 08, 2015 1:23 pm
Location: Rhode Island, USA

Re: Root Exhaustion

Post by JerryF »

Please, when posting output from Terminal commands, copy and paste them directly into your posts using code tags like this:

Code: Select all

Filesystem      Size  Used Avail Use% Mounted on
tmpfs            13G  2.3M   13G   1% /run
/dev/nvme1n1p3  1.5T  1.4T  5.8G 100% /
tmpfs            63G  8.0K   63G   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs            63G     0   63G   0% /run/qemu
/dev/nvme1n1p2  512M   32M  481M   7% /boot/efi
tmpfs            13G  184K   13G   1% /run/user/1000
/dev/sda3       1.9T  1.4T  482G  75% /media/misa/E2F857FBF857CC83
/dev/sdb2       9.1T  7.4T  1.8T  82% /media/misa/10TB
/dev/sdc2        13T   11T  2.7T  80% /media/misa/14TB
/dev/sdd1        13T  8.4T  4.5T  66% /media/misa/NEW14
/dev/sdf2        15T   12T  2.8T  82% /media/misa/16TB X
/dev/sde2        15T   14T  1.2T  93% /media/misa/16TB W
/dev/nvme1n1p1  365G  228G  119G  66% /media/misa/QUICKSTORE
To do that:

1. Highlight, then copy (Shift+Ctrl+C) the results of the command from the Terminal.
2. Click the </> button in the mini toolbar above the text box where you type your reply. Code tags [code][/code] will be inserted.
3. Place your cursor between the code tags and paste the results of the command there. Example: [code]copied results[/code].

Thanks so much!