Date:   Fri, 4 Nov 2022 13:15:39 -0500
From:   Nick Child <nnac123@...ux.ibm.com>
To:     Jakub Kicinski <kuba@...nel.org>
Cc:     netdev@...r.kernel.org, nick.child@....com, bjking1@...ux.ibm.com,
        ricklind@...ibm.com, dave.taht@...il.com
Subject: Re: [PATCH v2 net] ibmveth: Reduce maximum tx queues to 8



On 11/4/22 12:59, Jakub Kicinski wrote:
> On Fri, 4 Nov 2022 09:06:02 -0500 Nick Child wrote:
>> On 11/3/22 22:59, Jakub Kicinski wrote:
>>> On Wed,  2 Nov 2022 13:38:37 -0500 Nick Child wrote:
>>>> Previously, the maximum number of transmit queues allowed was 16. Due to
>>>> resource concerns, limit to 8 queues instead.
>>>>
>>>> Since the driver is virtualized away from the physical NIC, the purpose
>>>> of multiple queues is purely to allow for parallel calls to the
>>>> hypervisor. Therefore, there is no noticeable effect on performance by
>>>> reducing queue count to 8.
>>>
>>> I'm not sure if that's the point Dave was making but we should be
>>> influencing the default, not the MAX. Why limit the MAX?
>>
>> The MAX is always allocated in the driver's probe function. In the
>> driver's open and ethtool set-channels functions we set
>> real_num_tx_queues. So the number of allocated queues is always MAX,
>> but the number of queues actually in use may differ and can be set by
>> the user.
>> I hope this explains it. Otherwise, please let me know.
> 
> Perhaps I don't understand the worry. Is allowing 16 queues a problem
> because it limits how many instances the hypervisor can support?

No, the hypervisor is unaware of the number of netdev queues. The reason
for adding more netdev queues in the first place is to allow the higher
networking layers to make parallel calls to the driver's xmit function,
which the hypervisor can handle.
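
For reference, here is a minimal sketch of the pattern described above:
the MAX queue structures are allocated once at probe time with
alloc_etherdev_mqs(), and the number of queues actually in use is set in
open (and adjusted later by ethtool set-channels) with
netif_set_real_num_tx_queues(). The names (mydrv_*, MYDRV_MAX_QUEUES)
are made up for illustration, not the real ibmveth symbols:

/* Illustrative only, not the ibmveth code. */
#include <linux/etherdevice.h>
#include <linux/netdevice.h>

#define MYDRV_MAX_QUEUES 8	/* upper bound, allocated once at probe */

struct mydrv_priv {
	int placeholder;	/* driver-private state would live here */
};

static struct net_device *mydrv_probe_netdev(void)
{
	/* All MYDRV_MAX_QUEUES tx/rx queue structures are allocated here,
	 * whether or not they will ever be used. */
	return alloc_etherdev_mqs(sizeof(struct mydrv_priv),
				  MYDRV_MAX_QUEUES, MYDRV_MAX_QUEUES);
}

static int mydrv_open(struct net_device *netdev)
{
	/* Only advertise the queues currently in use; the user can raise
	 * or lower this later with ethtool -L (set-channels), up to the
	 * MYDRV_MAX_QUEUES allocated at probe time. */
	return netif_set_real_num_tx_queues(netdev, 4);
}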

> Or is the concern coming from your recent work on BQL and having many
> queues exacerbating buffer bloat?

Yes, and Dave can jump in here if I am wrong, but from my
understanding, if the NIC cannot send packets at the rate they are
queued, then these queues will inevitably fill to txqueuelen. In this
case, having more queues will not mean better throughput but will
result in a large number of allocations sitting in queues
(bufferbloat). I believe Dave's point was: if more queues do not
allow for better performance (and can risk bufferbloat), then why
have so many at all?
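
To make the BQL angle concrete, here is a minimal sketch of the hooks
involved: netdev_tx_sent_queue() on the xmit path and
netdev_tx_completed_queue() on the completion path, which let BQL bound
how many bytes sit on each tx queue. Again, the mydrv_* names are made
up; this is not the ibmveth implementation:

/* Illustrative only, not the ibmveth code. */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t mydrv_start_xmit(struct sk_buff *skb,
				    struct net_device *netdev)
{
	struct netdev_queue *txq =
		netdev_get_tx_queue(netdev, skb_get_queue_mapping(skb));

	/* ... hand the frame off to the hypervisor here ... */

	/* Report the bytes queued on this tx queue to BQL. */
	netdev_tx_sent_queue(txq, skb->len);
	return NETDEV_TX_OK;
}

static void mydrv_tx_complete(struct net_device *netdev, int queue_index,
			      unsigned int pkts, unsigned int bytes)
{
	struct netdev_queue *txq = netdev_get_tx_queue(netdev, queue_index);

	/* Completion path: BQL releases the byte budget and wakes the
	 * queue if it had been stopped. */
	netdev_tx_completed_queue(txq, pkts, bytes);
}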

After going through testing and seeing no difference in performance
with 8 vs 16 queues, I would rather not have the driver be a culprit
of potential resource hogging.
