Message-ID: <e4afe4c5-0262-4500-aeec-60f30734b4fc@default>
Date:   Tue, 12 Mar 2019 10:22:46 -0700 (PDT)
From:   Dongli Zhang <dongli.zhang@...cle.com>
To:     <virtualization@...ts.linux-foundation.org>,
        <linux-block@...r.kernel.org>
Cc:     <mst@...hat.com>, <axboe@...nel.dk>, <jasowang@...hat.com>,
        <linux-kernel@...r.kernel.org>
Subject: virtio-blk: should num_vqs be limited by num_possible_cpus()?

I observed that, with the qemu cmdline below, there is one MSI-X vector for
config and one shared vector for all queues when num-queues for virtio-blk
is larger than the number of possible cpus:

qemu: "-smp 4" while "-device virtio-blk-pci,drive=drive-0,id=virtblk0,num-queues=6"

# cat /proc/interrupts 
           CPU0       CPU1       CPU2       CPU3
... ...
 24:          0          0          0          0   PCI-MSI 65536-edge      virtio0-config
 25:          0          0          0         59   PCI-MSI 65537-edge      virtio0-virtqueues
... ...


However, when num-queues is the same as the number of possible cpus:

qemu: "-smp 4" while "-device virtio-blk-pci,drive=drive-0,id=virtblk0,num-queues=4"

# cat /proc/interrupts 
           CPU0       CPU1       CPU2       CPU3
... ... 
 24:          0          0          0          0   PCI-MSI 65536-edge      virtio0-config
 25:          2          0          0          0   PCI-MSI 65537-edge      virtio0-req.0
 26:          0         35          0          0   PCI-MSI 65538-edge      virtio0-req.1
 27:          0          0         32          0   PCI-MSI 65539-edge      virtio0-req.2
 28:          0          0          0          0   PCI-MSI 65540-edge      virtio0-req.3
... ...

In the above case, there is one MSI-X vector per queue.


This is because the maximum number of queues is not limited by the number of
possible cpus.
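
If I read drivers/virtio/virtio_pci_common.c correctly, the shared vector comes
from the fallback in vp_find_vqs(): it first tries to allocate one MSI-X vector
per virtqueue plus one for config, and only when that allocation fails does it
retry with a single vector shared by all virtqueues. Paraphrased sketch below
(parameters trimmed, not the verbatim source):

/* Paraphrased sketch of vp_find_vqs(), drivers/virtio/virtio_pci_common.c */
int vp_find_vqs(struct virtio_device *vdev, unsigned nvqs,
		struct virtqueue *vqs[], vq_callback_t *callbacks[],
		const char * const names[], struct irq_affinity *desc)
{
	int err;

	/* Best case: one MSI-X vector per virtqueue (plus one for config),
	 * which gives the per-queue virtio0-req.N interrupts above. */
	err = vp_find_vqs_msix(vdev, nvqs, vqs, callbacks, names,
			       true /* per_vq_vectors */, desc);
	if (!err)
		return 0;
	/* Fallback: two vectors total, one for config and one shared by
	 * all virtqueues -- the virtio0-virtqueues line above. */
	err = vp_find_vqs_msix(vdev, nvqs, vqs, callbacks, names,
			       false /* per_vq_vectors */, desc);
	if (!err)
		return 0;
	/* Last resort: legacy INTx. */
	return vp_find_vqs_intx(vdev, nvqs, vqs, callbacks, names);
}

So with 4 possible cpus and num-queues=6, the per-vq-vector attempt evidently
fails and we end up in the shared-vector case.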

By default, nvme (regardless of write_queues and poll_queues) and
xen-blkfront limit the number of queues to num_possible_cpus().
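
For comparison, the clamping in those two drivers looks roughly like this
(quoting from memory, so the exact lines may differ):

/* drivers/nvme/host/pci.c, nvme_setup_io_queues() -- approximate: the
 * default I/O queue count is derived from num_possible_cpus() (write and
 * poll queues are added on top of that in newer kernels). */
nr_io_queues = num_possible_cpus();

/* drivers/block/xen-blkfront.c, xlblk_init() -- approximate: the module
 * parameter is capped at the number of possible cpus. */
if (xen_blkif_max_queues > nr_cpu_ids)
	xen_blkif_max_queues = nr_cpu_ids;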


Is this by design, or can we fix it with the patch below?


diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 4bc083b..df95ce3 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -513,6 +513,8 @@ static int init_vq(struct virtio_blk *vblk)
 	if (err)
 		num_vqs = 1;
 
+	num_vqs = min_t(unsigned int, num_possible_cpus(), num_vqs);
+
 	vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);
 	if (!vblk->vqs)
 		return -ENOMEM;
--


PS: The same issue applies to virtio-scsi as well.
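
For virtio-scsi the analogous clamp would presumably go into virtscsi_probe(),
right after num_queues is read from the device config; something like this
(untested sketch, context lines approximate):

--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ static int virtscsi_probe(struct virtio_device *vdev)
 	/* We need to know how many queues before we allocate. */
 	num_queues = virtscsi_config_get(vdev, num_queues) ? : 1;
+	num_queues = min_t(unsigned int, num_possible_cpus(), num_queues);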

Thank you very much!

Dongli Zhang
