Anybody else seeing network problems with Boxes/Libvirt VMs on Fedora 44?

I imported an existing disk image and also tried to create a new VM from an Ubuntu ISO, and neither gets NAT network access.

If so please add any details from here:

https://bugzilla.redhat.com/show_bug.cgi?id=2466836

#Fedora #libvirt

2466836 – imported VM not getting NAT access to network

I've imported a VM disk image into Fedora 44 Boxes (I forgot to export the XML), and while it has an IP address on the default network, it doesn't appear to have access to the outside world (no NAT).

Anybody know how to enable it?

#Fedora44 #Fedora #libvirt
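For anyone hitting the same thing, a quick sanity check (assuming the stock `default` NAT network that libvirt ships) is to confirm the network is active and actually NAT-forwarded:

```console
$ sudo virsh net-list --all          # is "default" active and set to autostart?
$ sudo virsh net-dumpxml default     # should contain <forward mode='nat'/>
$ sudo virsh net-start default       # start it if it is inactive
```

If the XML shows `<forward mode='nat'/>` and the network is up, the problem is more likely in the host's firewall rules than in the network definition itself.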

Ansible roles: proxy_env, ssh, etc_hosts, libvirt released

https://blog.wagemakers.be/blog/2026/05/03/ansible-proxy_env-ssh-etc_hosts-libvirt_released/

Made some time to do some work on a few #ansible roles that I maintain. You’ll find the new releases in this blog post.

* stafwag.proxy_env 2.1.0
* stafwag.ssh 1.1.1
* stafwag.libvirt 2.1.0
* stafwag.etc_hosts 1.1.1

#ansible #libvirt #ssh #linux #freebsd

#stafwag @stafwag

PSA for anyone using #QEMU #KVM for #SingleGPUPassthrough
Guides all over the net suggest hook scripts that detach PCI devices, unload/load kernel modules, and do other unnecessary things. Here's my current hook script for starting the VM:

systemctl stop display-manager

That's it. And the reverse for teardown:

systemctl start display-manager

#IOMMU groups still apply, and you need to pass the correct PCI devices to your VM, but everything else is handled automatically.
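For context, here's roughly how that single command fits into libvirt's qemu hook mechanism. This is a sketch, and the guest name `win10-gpu` is made up; libvirtd calls `/etc/libvirt/hooks/qemu` with the guest name and operation as arguments:

```console
#!/bin/sh
# /etc/libvirt/hooks/qemu -- invoked by libvirtd as: qemu <guest> <operation> <sub-op> ...
guest="$1"
op="$2"

if [ "$guest" = "win10-gpu" ]; then
    case "$op" in
        prepare) systemctl stop display-manager ;;   # free the GPU before the VM starts
        release) systemctl start display-manager ;;  # give it back after shutdown
    esac
fi
```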

Disclaimer: this is how it currently works with my AMD card. I did have a working setup with my NVIDIA card that unloaded/loaded kernel modules, but things seem to have come a long way since I set that up.

#virtualization #libvirt #vfio

TIL you can just set a connection parameter to use #libvirt's read-only socket, i.e.: qemu:///system?socket=/var/run/libvirt/libvirt-sock-ro 🙂
Monitoring tool doesn't like to use virsh's --readonly parameter :/

Asking the same question here as well.

When using libvirtd/qemu+kvm, what is your preferred way of sharing a folder with a huge dataset (>50 TB) into multiple guests, especially when you want the VM to be network-isolated?

Are you using:
* the filesystem-"device": virtiofs, virtio-9p, mtp
* the disk-device: with dir-source
* link-local network with: NFS, SMB, ...
* a different architecture entirely?

See full question: https://www.reddit.com/r/HomeServer/comments/1skp5u5/storage_pass_through_without_to_libvirtqemu_vm/

#qemu #kvm #libvirt #NixOS #Homelab
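Not an answer, but for reference, the virtiofs option from that list looks roughly like this in the domain XML (the source path and target tag are made up, and virtiofs requires shared memory backing):

```xml
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/tank/dataset'/>
  <target dir='dataset'/>
</filesystem>
```

Inside the guest the tag is then mounted with `mount -t virtiofs dataset /mnt` — no network involved, which fits the isolation requirement.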

Was messing around with the ansible libvirt collection to deploy some molecule instances. Turns out the virt_install module forgot to add the #cloud-config header to user-data when converting it from a dictionary to a string, so my cloud-init user never got created. Currently preparing a pull request but I'm a bit stuck on the test setup.

#ansible #molecule #python #libvirt #automation #cloudinit
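The fix amounts to prepending the header during serialization. A minimal sketch of the idea (the function name is mine, not the module's, and I'm serializing with JSON here only because it's a valid YAML subset that cloud-init accepts):

```python
import json

def render_user_data(data: dict) -> str:
    """Serialize a user-data dict for cloud-init.

    cloud-init only treats the payload as cloud-config when the
    very first line is the literal '#cloud-config' header; without
    it, directives like 'users' are silently ignored.
    """
    return "#cloud-config\n" + json.dumps(data, indent=2)

print(render_user_data({"users": [{"name": "molecule"}]}))
```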

Wow, the #libvirt #zfs integration sure is a giant foot-gun (definitely not helped by the fact that libvirt and zfs use the same terms to mean similar, but not identical, things).
Anyway, since the professionals have concluded that raw files on zfs perform better than zvols, it probably makes more sense to use a dataset as a "dir" pool type than as a "zfs" pool type.
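A sketch of that "dir"-pool-over-a-dataset setup (the dataset name and mountpoint are made up):

```console
$ zfs create -o mountpoint=/var/lib/libvirt/images/tank tank/vmimages
$ virsh pool-define-as vmimages dir --target /var/lib/libvirt/images/tank
$ virsh pool-autostart vmimages
$ virsh pool-start vmimages
```

libvirt then just sees a directory of raw/qcow2 files, while snapshots, compression, and quotas stay on the zfs side where the terminology is unambiguous.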

If you use virtual machines with libvirt/qemu/kvm and vagrant, you can change the behaviour of vagrant suspend so that it saves the machine's state instead of pausing execution:

libvirt.suspend_mode = "managedsave"

Before changing this, make sure the machine is shut down.

You can do the same from virt-manager (right-click the machine > Shut Down > Save) or from the console with virsh:

virsh managedsave "id"
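In context, that setting goes inside the provider block of the Vagrantfile (the `:libvirt` provider comes from the vagrant-libvirt plugin):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    # "vagrant suspend" now does a managedsave (state saved to disk,
    # domain stopped) instead of pausing the VM in memory.
    libvirt.suspend_mode = "managedsave"
  end
end
```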

#linux #vm #libvirt #kvm #vagrant

No more. No less.

```console
$ freebsd-version ; grep -c ^WITHOUT_ /etc/src.conf ; kldstat
15.0-STABLE
39
Id Refs Address Size Name
1 1 0xffffffff80200000 11af2f8 kernel
```

#FreeBSD #libvirt #QEMU