Message-ID: <0fbdcfa6-cbd7-4f09-93b1-40898d5f77d1@oracle.com>
Date:   Tue, 19 Mar 2019 10:22:21 +0800
From:   Dongli Zhang <dongli.zhang@...cle.com>
To:     Jason Wang <jasowang@...hat.com>, Cornelia Huck <cohuck@...hat.com>
Cc:     mst@...hat.com, virtualization@...ts.linux-foundation.org,
        linux-block@...r.kernel.org, axboe@...nel.dk,
        linux-kernel@...r.kernel.org
Subject: Re: virtio-blk: should num_vqs be limited by num_possible_cpus()?

Hi Jason,

On 3/18/19 3:47 PM, Jason Wang wrote:
> 
> On 2019/3/15 8:41 PM, Cornelia Huck wrote:
>> On Fri, 15 Mar 2019 12:50:11 +0800
>> Jason Wang <jasowang@...hat.com> wrote:
>>
>>> Or something like I proposed several years ago?
>>> https://do-db2.lkml.org/lkml/2014/12/25/169
>>>
>>> Btw, for virtio-net, I think we actually want to go for having a maximum
>>> number of supported queues like what hardware does. This would be useful
>>> for e.g. cpu hotplug or XDP (requires per-cpu TX queues). But the current
>>> vector allocation doesn't support this, which results in all virtqueues
>>> sharing a single vector. We may indeed need a more flexible policy here.
>> I think it should be possible for the driver to give the transport
>> hints how to set up their queues/interrupt structures. (The driver
>> probably knows best about its requirements.) Perhaps whether a queue is
>> high or low frequency, or whether it should be low latency, or even
>> whether two queues could share a notification mechanism without
>> drawbacks. It's up to the transport to make use of that information, if
>> possible.
> 
> 
> Exactly, and that is what the above series tried to do, by providing hints of
> e.g. which queues want to share a notification.
> 

I have read your patch set on providing more flexibility in queue-to-vector
mapping.

One use case of the patch set is that we would be able to enable more queues
when there is only a limited number of vectors.

Another use case is that we may classify queues as high priority or low
priority, as mentioned by Cornelia.

For virtio-blk, we may extend the driver based on this patch set to enable
something similar to write_queues/poll_queues in nvme, when (set->nr_maps != 1).
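
(To make this concrete, below is a rough, untested sketch of how a driver with
set->nr_maps = 2 could split its hw queues into a default group and a poll
group in its ->map_queues callback, loosely modeled on what nvme does. The
vblk_map_queues() name and the num_default_queues/num_poll_queues variables
are made up for illustration, not part of any real patch.)

/*
 * Illustrative sketch only, not a real virtio-blk patch: with
 * set->nr_maps = 2, the ->map_queues callback could split the hw
 * queues into a default group and a poll group.
 */
static int vblk_map_queues(struct blk_mq_tag_set *set)
{
        /* first num_default_queues hw queues serve regular requests */
        set->map[HCTX_TYPE_DEFAULT].nr_queues = num_default_queues;
        set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
        blk_mq_map_queues(&set->map[HCTX_TYPE_DEFAULT]);

        /* the remaining hw queues are dedicated to polling */
        set->map[HCTX_TYPE_POLL].nr_queues = num_poll_queues;
        set->map[HCTX_TYPE_POLL].queue_offset = num_default_queues;
        return blk_mq_map_queues(&set->map[HCTX_TYPE_POLL]);
}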


Yet the question I am asking in this email thread is about a different scenario.

The issue is not that we do not have enough vectors (although that is why only
1 vector is allocated for all virtio-blk queues). Since virtio-blk so far has
(set->nr_maps == 1), the block layer limits the number of hw queues to
nr_cpu_ids, so we indeed do not need more than nr_cpu_ids hw queues in
virtio-blk.
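
(For reference, the clamp I am referring to is roughly the following check in
blk_mq_alloc_tag_set(); paraphrased from memory, so please check the actual
source:)

        /* paraphrased from blk_mq_alloc_tag_set(): with a single queue
         * map, hw queues beyond nr_cpu_ids can never be reached by any
         * CPU, so the count is clamped.
         */
        if (set->nr_maps == 1 && set->nr_hw_queues > nr_cpu_ids)
                set->nr_hw_queues = nr_cpu_ids;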

That's why I am asking why we do not change the flow to one of the options
below when the number of supported hw queues is more than nr_cpu_ids (and
set->nr_maps == 1; virtio-blk does not set nr_maps, and the block layer sets it
to 1 when the driver does not specify a value):

option 1:
As nvme and xen-netfront do, limit the number of hw queues to nr_cpu_ids (a
sketch follows the options below).

option 2:
If the vectors are not enough, use the maximum number of vectors (in effect
nr_cpu_ids) as the number of hw queues.

option 3:
Allow more vectors even though the block layer would support at most nr_cpu_ids
queues.
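
To make option 1 concrete, here is a rough, untested sketch of what the clamp
could look like in virtio_blk's init_vq(), right after the driver reads
num_queues from the device config. Only the min_t() line is new; treat it as an
illustration rather than a formal patch:

        err = virtio_cread_feature(vdev, VIRTIO_BLK_F_MQ,
                                   struct virtio_blk_config, num_queues,
                                   &num_vqs);
        if (err)
                num_vqs = 1;

        /* option 1: never ask for more hw queues than nr_cpu_ids */
        num_vqs = min_t(unsigned int, nr_cpu_ids, num_vqs);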


I understand that a new policy for queue-to-vector mapping would be very
helpful. I am just asking the question from the block layer's point of view.

Thank you very much!

Dongli Zhang
