FS#58375 - [libvirt] Creating pool for disk sd* fails

Attached to Project: Community Packages
Opened by rainer (raneon) - Saturday, 28 April 2018, 11:08 GMT
Last edited by Doug Newgard (Scimmia) - Saturday, 09 June 2018, 05:17 GMT
Task Type Bug Report
Category Packages
Status Closed
Assigned To Christian Rebischke (Shibumi)
Architecture x86_64
Severity High
Priority Normal
Reported Version
Due in Version Undecided
Due Date Undecided
Percent Complete 100%
Votes 1
Private No

Details

Description:
After the upgrade to libvirt 4.2, I can no longer use my disks as a pool for my VMs. Virt-Manager shows an error message saying it was not possible to create/define the pool: "Error creating pool: Could not define storage pool: internal error: missing backend for pool type 4 (disk)
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 89, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/createpool.py", line 442, in _async_pool_create
poolobj = self._pool.install(create=True, meter=meter, build=build)
File "/usr/share/virt-manager/virtinst/storage.py", line 531, in install
raise RuntimeError(_("Could not define storage pool: %s") % str(e))
RuntimeError: Could not define storage pool: internal error: missing backend for pool type 4 (disk)"

Then I removed the pool with Virt-Manager, to rule out a bug in the frontend. If I then try to create the new pool in the terminal with virsh (which is part of libvirt, according to my understanding), I get a similar error message:
"virsh pool-define-as sda disk - - /dev/sda - /dev
error: Failed to define pool sda
error: internal error: missing backend for pool type 4 (disk)"

In the past, creating the pool always worked, at least with virsh. I didn't try letting virsh format my disk, as I don't want to lose the data.
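For context, the positional arguments of the pool-define-as command above map onto a disk-pool XML document. The following sketch writes out what the equivalent definition would roughly look like (the exact XML libvirt generates may differ slightly; the field mapping is assumed from the pool-define-as argument order: name, type, source-host, source-path, source-dev, source-name, target):

```shell
# Sketch: approximate XML equivalent of
#   virsh pool-define-as sda disk - - /dev/sda - /dev
cat > /tmp/sda-pool.xml <<'EOF'
<pool type='disk'>
  <name>sda</name>
  <source>
    <device path='/dev/sda'/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>
EOF
# Confirm the file was written with the expected pool name
grep -q "<name>sda</name>" /tmp/sda-pool.xml && echo "pool XML written"
```

Such a file could be fed to `virsh pool-define /tmp/sda-pool.xml` instead of pool-define-as; either way the same "missing backend for pool type 4 (disk)" error would be expected, since the backend module itself is absent.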

Additional info:
* libvirt 4.2, Linux 4.16.4

Steps to reproduce:
1. Try to create a new pool with a complete disk or try to start the existing disk pool

Closed by  Doug Newgard (Scimmia)
Saturday, 09 June 2018, 05:17 GMT
Reason for closing:  Fixed
Additional comments about closing:  libvirt 4.4.0-2
Comment by loqs (loqs) - Sunday, 29 April 2018, 16:29 GMT
libvirt_storage_backend_disk depends on parted, so add parted to makedepends and optdepends, and add an explicit configure option for it so the build will fail in the future if the module cannot be built.
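A minimal sketch of what that could look like in the PKGBUILD (the optdepends wording and the exact configure invocation are assumptions, not the maintainer's actual change):

```shell
# Hypothetical PKGBUILD fragment: make parted available at build time so
# libvirt_storage_backend_disk.so can actually be built, and document the
# runtime relationship for users.
makedepends+=(parted)
optdepends+=('parted: disk storage backend')

# In build(), request the backend explicitly so configure fails loudly if
# its prerequisites are missing, instead of silently skipping the module:
#   ./configure ... --with-storage-disk
```

The key point is the explicit `--with-storage-disk`: with autodetection, a missing parted just drops the backend without any error at packaging time.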
Comment by rainer (raneon) - Sunday, 29 April 2018, 16:52 GMT
My workaround was to downgrade to libvirt 4.0 on all my hosts.

Sounds like the fix to the package build could be the solution. Will this be integrated into the next build?
Comment by loqs (loqs) - Tuesday, 01 May 2018, 17:06 GMT
You might consider changing the configure stanza to be explicit about which of the storage backends should be built:

[ -f Makefile ] || ./configure --prefix=/usr --libexec=/usr/lib/"${pkgname}" --sbindir=/usr/bin \
--without-xen --with-udev --without-hal --disable-static \
--with-init-script=systemd \
--with-qemu-user=nobody --with-qemu-group=kvm \
--with-netcf --with-interface --with-lxc \
--with-storage-dir \
--with-storage-fs \
--with-storage-lvm \
--with-storage-iscsi \
--with-storage-scsi \
--with-storage-mpath \
--with-storage-disk \
--with-storage-rbd \
--with-storage-sheepdog=no \
--with-storage-gluster \
--with-storage-zfs=no \
--with-storage-vstorage=no
# --with-audit
Comment by rainer (raneon) - Tuesday, 01 May 2018, 19:30 GMT
I agree. Just now I tested glusterfs for the first time, but I got a similar error in Virt-Manager, even with the older libvirt 4.0 package. At least to me it seems that this package needs some additional work, unless I'm doing something wrong.

Error creating pool: Could not define storage pool: internal error: missing backend for pool type 10 (gluster)

Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 89, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/createpool.py", line 442, in _async_pool_create
poolobj = self._pool.install(create=True, meter=meter, build=build)
File "/usr/share/virt-manager/virtinst/storage.py", line 531, in install
raise RuntimeError(_("Could not define storage pool: %s") % str(e))
RuntimeError: Could not define storage pool: internal error: missing backend for pool type 10 (gluster)
Comment by Christian Rebischke (Shibumi) - Friday, 04 May 2018, 20:29 GMT
4.3.0 is in community-testing with the patch provided by loqs. It would be nice if you could check it out before I move it to community.
Comment by rainer (raneon) - Saturday, 05 May 2018, 19:47 GMT
Should glusterfs work now as well? For me this didn't work.

But I could bring up my disk pool.
Comment by loqs (loqs) - Saturday, 05 May 2018, 20:37 GMT
Is qemu-block-gluster installed on the system?
Comment by loqs (loqs) - Saturday, 05 May 2018, 22:05 GMT
I was wrong to think it might be related to the absence of qemu-block-gluster.
 FS#56677  moved libvirt_storage_backend_gluster.so out of libvirt into a new package, libvirt-glusterfs, but the PKGBUILD never builds that package.
The attached patch addresses that. It does not remove the unused patch.
Comment by loqs (loqs) - Wednesday, 06 June 2018, 18:15 GMT
@Shibumi the split package issue is still unresolved with libvirt 4.4.0-1. Is there more information you require in order to resolve it?
Comment by Christian Rebischke (Shibumi) - Thursday, 07 June 2018, 08:47 GMT
loqs, using a split package makes no sense here. We have just one build function, and it seems that building both packages (libvirt and libvirt-glusterfs) will break libvirt, because it's compiled with glusterfs support. I am thinking about either splitting this into two packages, or just shipping libvirt without glusterfs.
Comment by loqs (loqs) - Thursday, 07 June 2018, 10:44 GMT
Libvirt has shipped without libvirt_storage_backend_gluster.so since 4.1.0-1.
The issue with 4.4.0-1 is that libvirt_storage_file_gluster.so is not split out into the currently nonexistent libvirt-glusterfs.
If you are saying it cannot be split, see https://src.fedoraproject.org/rpms/libvirt/blob/master/f/libvirt.spec which does exactly that.
For split packages with one build function, there are gcc, linux, and systemd as examples.
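A minimal sketch of that pattern, assuming the usual split-package layout (the module path and the dependency lists here are assumptions for illustration, not taken from the actual PKGBUILD):

```shell
# Hypothetical split PKGBUILD skeleton: one build() pass, two package functions.
pkgbase=libvirt
pkgname=(libvirt libvirt-glusterfs)

build() {
  cd "$pkgbase-$pkgver"
  # single configure/make pass with gluster support enabled
  ./configure --prefix=/usr --with-storage-gluster
  make
}

package_libvirt() {
  cd "$pkgbase-$pkgver"
  make DESTDIR="$pkgdir" install
  # hand the gluster modules over to the split package
  rm "$pkgdir"/usr/lib/libvirt/storage-backend/libvirt_storage_backend_gluster.so
}

package_libvirt-glusterfs() {
  depends=(libvirt glusterfs)
  cd "$pkgbase-$pkgver"
  install -Dm755 src/.libs/libvirt_storage_backend_gluster.so \
    "$pkgdir"/usr/lib/libvirt/storage-backend/libvirt_storage_backend_gluster.so
}
```

makepkg runs build() once and then each package_*() function, so the gluster backend ends up only in libvirt-glusterfs while libvirt keeps everything else, which is essentially what the Fedora spec does.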
Comment by Christian Rebischke (Shibumi) - Friday, 08 June 2018, 14:48 GMT
I have merged libvirt with libvirt-glusterfs. Can the bug report opener confirm that 4.4.0-2 fixes his issues?
Comment by rainer (raneon) - Friday, 08 June 2018, 16:05 GMT
Hi Christian, thanks for the update and your support. I have upgraded my server with libvirt 4.4.0-2.

libvirtd didn't start anymore after the upgrade. I had to install the package qemu-block-rbd to get libvirtd up and running again, which added the following dependencies:

ceph-13.2.0-1 ceph-libs-13.2.0-1 gperftools-2.7-1 leveldb-1.20-1 lsb-release-1.4-15 oath-toolkit-2.6.2-3 python2-asn1crypto-0.24.0-1 python2-backports-1.0-1 python2-backports.functools_lru_cache-1.5-1 python2-backports.unittest_mock-1.2.1-1 python2-beaker-1.10.0-1 python2-cffi-1.11.5-1 python2-cheroot-6.3.1-1 python2-cherrypy-15.0.0-1 python2-cryptography-2.2.2-1 python2-funcsigs-1.0.2-1 python2-idna-2.6-1 python2-ipaddress-1.0.22-1 python2-jaraco-2017.11.25-1 python2-jinja-2.10-1 python2-mako-1.0.7-1 python2-markupsafe-1.0-1 python2-mock-2.0.0-2 python2-more-itertools-4.2.0-1 python2-pbr-4.0.4-1 python2-pecan-1.3.2-1 python2-ply-3.11-1 python2-portend-2.2-1 python2-prettytable-0.7.2-8 python2-pycparser-2.18-1 python2-pyopenssl-18.0.0-1 python2-pytz-2018.4-1 python2-singledispatch-3.4.0.3-3 python2-tempora-1.11-1 python2-webob-1.8.2-1 python2-werkzeug-0.14.1-2 xmlsec-1.2.26-1 qemu-block-rbd-2.12.0-1

I assume this was intended. Does the libvirt package need updated dependencies? But other than this issue, I was able to get the glusterfs volume mounted with libvirt/Virt-Manager.
Comment by loqs (loqs) - Friday, 08 June 2018, 16:34 GMT
Should qemu-block-gluster, qemu-block-iscsi, and qemu-block-rbd be made dependencies? Is qemu-blockcluster a spelling mistake in the optdepends?
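For triage, a quick check of which of those packages are present on an affected host could look like this (a sketch assuming an Arch system with pacman; the package names are taken from the comment above):

```shell
# Report install state of each optional qemu block driver package.
# On a non-Arch system (no pacman), every package reports "missing".
for p in qemu-block-gluster qemu-block-iscsi qemu-block-rbd; do
  if pacman -Qq "$p" >/dev/null 2>&1; then
    echo "$p: installed"
  else
    echo "$p: missing"
  fi
done
```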
