Date:   Thu, 30 Dec 2021 17:50:23 +0100 (CET)
From:   Justin Iurman <justin.iurman@...ege.be>
To:     Ido Schimmel <idosch@...sch.org>
Cc:     netdev@...r.kernel.org, davem@...emloft.net, kuba@...nel.org,
        dsahern@...nel.org, yoshfuji@...ux-ipv6.org
Subject: Re: [PATCH net-next v2] ipv6: ioam: Support for Queue depth data
 field

On Dec 30, 2021, at 3:47 PM, Ido Schimmel idosch@...sch.org wrote:
> On Mon, Dec 27, 2021 at 03:06:42PM +0100, Justin Iurman wrote:
>> On Dec 26, 2021, at 2:15 PM, Ido Schimmel idosch@...sch.org wrote:
>> > On Sun, Dec 26, 2021 at 01:59:08PM +0100, Justin Iurman wrote:
>> >> On Dec 26, 2021, at 1:40 PM, Ido Schimmel idosch@...sch.org wrote:
>> >> > On Sun, Dec 26, 2021 at 12:47:51PM +0100, Justin Iurman wrote:
>> >> >> On Dec 24, 2021, at 6:53 PM, Ido Schimmel idosch@...sch.org wrote:
>> >> >> > Why 'qlen' is used and not 'backlog'? From the paragraph you quoted it
>> >> >> > seems that queue depth needs to take into account the size of the
>> >> >> > enqueued packets, not only their number.
>> >> >> 
>> >> >> The quoted paragraph contains the following sentence:
>> >> >> 
>> >> >>    "The queue depth is expressed as the current amount of memory
>> >> >>     buffers used by the queue"
>> >> >> 
>> >> >> So my understanding is that we need their number, not their size.
>> >> > 
>> >> > It also says "a packet could consume one or more memory buffers,
>> >> > depending on its size". If, for example, you define tc-red limit as 1M,
>> >> > then it makes a lot of difference if the 1,000 packets you have in the
>> >> > queue are 9,000 bytes in size or 64 bytes.
>> >> 
>> >> Agree. We probably could use 'backlog' instead, regarding this
>> >> statement:
>> >> 
>> >>   "It should be noted that the semantics of some of the node data fields
>> >>    that are defined below, such as the queue depth and buffer occupancy,
>> >>    are implementation specific.  This approach is intended to allow IOAM
>> >>    nodes with various different architectures."
>> >> 
>> >> It would indeed make more sense, based on your example. However, the
>> >> limit (32 bits) could be reached faster using 'backlog' rather than
>> >> 'qlen'. But I guess this tradeoff is the price to pay to be as close
>> >> as possible to the spec.
>> > 
>> > At least in Linux 'backlog' is 32 bits so we are OK :)
>> > We don't have such big buffers in hardware and I'm not sure what
>> > insights an operator will get from a queue depth larger than 4GB...
>> 
>> Indeed :-)
>> 
>> > I just got an OOO auto-reply from my colleague so I'm not sure I will be
>> > able to share his input before next week. Anyway, reporting 'backlog'
>> > makes sense to me, FWIW.
>> 
>> Right. I read that Linus is planning to release a -rc8 so I think I can
>> wait another week before posting -v3.
> 
> The answer I got from my colleagues is that they expect the field to
> either encode bytes (what Mellanox/Nvidia is doing) or "cells", which is
> an "allocation granularity of memory within the shared buffer" (see man
> devlink-sb).

Thanks for that. It looks like devlink-sb would be gold for IOAM but,
based on what we discussed previously with Jakub, it unfortunately cannot
be used here. So I guess we have no choice but to use 'backlog' and
therefore report bytes, which is fine anyway. Thanks again for your
helpful comments, Ido, I appreciate it.
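
For v3, the queue depth hook would then read the backlog of the egress
qdisc. Roughly what I have in mind (untested sketch only, to illustrate
the idea; it relies on qdisc_qstats_qlen_backlog() from
include/net/sch_generic.h, and the exact guard for the "no egress qdisc
to inspect" case still needs to be settled):

	/* queue depth */
	if (trace->type.bit6) {
		struct netdev_queue *queue;
		struct Qdisc *qdisc;
		__u32 qlen, backlog;

		if (skb_dst(skb)->dev->flags & IFF_LOOPBACK) {
			/* nothing meaningful to report here */
			*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
		} else {
			/* backlog of the egress qdisc, in bytes */
			queue = skb_get_tx_queue(skb_dst(skb)->dev, skb);
			qdisc = rcu_dereference(queue->qdisc);
			qdisc_qstats_qlen_backlog(qdisc, &qlen, &backlog);
			*(__be32 *)data = cpu_to_be32(backlog);
		}
		data += sizeof(__be32);
	}

So the field would carry the backlog in bytes and fall back to the
"unavailable" value when there is nothing to measure.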
