Message-ID: <20180803164150.GA21806@splinter.mtl.com>
Date:   Fri, 3 Aug 2018 19:41:50 +0300
From:   Ido Schimmel <idosch@...sch.org>
To:     Jakub Kicinski <jakub.kicinski@...ronome.com>
Cc:     Eran Ben Elisha <eranbe@...lanox.com>,
        David Miller <davem@...emloft.net>, saeedm@...lanox.com,
        netdev@...r.kernel.org, jiri@...lanox.com,
        alexander.duyck@...il.com, helgaas@...nel.org
Subject: Re: [pull request][net-next 00/10] Mellanox, mlx5 and devlink
 updates 2018-07-31

On Thu, Aug 02, 2018 at 03:53:15PM -0700, Jakub Kicinski wrote:
> No one is requesting full RED offload here..  if someone sets the
> parameters you can't support you simply won't offload them.  And ignore
> the parameters which only make sense in software terms.  Look at the
> docs for mlxsw:
> 
> https://github.com/Mellanox/mlxsw/wiki/Queues-Management#offloading-red
> 
> It says "not offloaded" in a number of places.
> 
...
> It's generally preferable to implement a subset of an existing well-defined
> API than create vendor knobs, hence hardly a misuse.

Sorry for derailing the discussion, but you mentioned some points that
have been bothering me for a while.

I think we didn't do a very good job with buffer management and this is
exactly why you see some parameters marked as "not offloaded". Take the
"limit" (queue size) for example. It's configured via devlink-sb, by
setting a quota on the number of bytes that can be queued for the port
and TC (queue) that RED manages. See:

https://github.com/Mellanox/mlxsw/wiki/Quality-of-Service#pool-binding

It would have been much better and more user-friendly not to ignore this
parameter and instead have users configure the limit using existing
interfaces (tc), rather than creating a discrepancy between the software and
hardware data paths by configuring the hardware directly via devlink-sb.
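
To make this concrete, here is a rough userspace sketch of the idea, i.e.
deriving the per-(port, TC) byte quota from the RED "limit" the user already
passes via tc, instead of taking it from devlink-sb. This only illustrates
the accounting, it is not driver code, and all the names in it are made up:

#include <stdbool.h>
#include <stdio.h>

struct port_tc_buf {
	unsigned int quota_bytes;	/* derived from the tc RED "limit" */
	unsigned int used_bytes;	/* current occupancy */
};

/* Would be called when the user installs/changes RED on (port, TC). */
static void offload_red_limit(struct port_tc_buf *buf, unsigned int limit)
{
	buf->quota_bytes = limit;	/* single source of truth: tc */
}

/* Per-packet admission decision the hardware would make. */
static bool buf_admit(struct port_tc_buf *buf, unsigned int pkt_len)
{
	if (buf->used_bytes + pkt_len > buf->quota_bytes)
		return false;	/* drop once the configured limit is hit */
	buf->used_bytes += pkt_len;
	return true;
}

int main(void)
{
	struct port_tc_buf buf = { 0 };
	int i;

	offload_red_limit(&buf, 3000);	/* e.g. "tc ... red limit 3000 ..." */
	for (i = 0; i < 3; i++)		/* prints 1 1 0: third packet is over quota */
		printf("%d\n", buf_admit(&buf, 1500));
	return 0;
}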

I believe devlink-sb is mainly the result of Linux's shortcomings in
this area and our lack of perspective back then. While the qdisc layer
(Linux's shared buffers) works for end hosts, it requires enhancements
(mainly on ingress) for switches (physical/virtual) that forward
packets.

For example, switches (I'm familiar with Mellanox ASICs, but I assume
the concept is similar in other ASICs) have ingress buffers where
packets are stored while going through the pipeline. Once out of the
pipeline, you know from which port and queue the packet should egress. If
you have both lossless and lossy traffic in your network, you probably want
to classify it into different ingress buffers and mark the buffers where the
lossless traffic is stored as such, so that PFC frames are emitted once
buffer occupancy crosses a certain threshold.
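
To illustrate what I mean by a lossless buffer with a pause threshold, here
is a toy model (made-up names, nothing resembling the actual ASIC or driver
interfaces):

#include <stdbool.h>
#include <stdio.h>

struct ingress_buf {
	bool lossless;			/* PFC is enabled for this buffer */
	unsigned int xoff_thresh;	/* occupancy that triggers a pause */
	unsigned int used_bytes;
};

/* Returns true if a PFC pause frame should be emitted for this buffer. */
static bool ingress_buf_rx(struct ingress_buf *buf, unsigned int pkt_len)
{
	buf->used_bytes += pkt_len;
	return buf->lossless && buf->used_bytes >= buf->xoff_thresh;
}

int main(void)
{
	struct ingress_buf lossy = { .lossless = false };
	struct ingress_buf lossless = { .lossless = true, .xoff_thresh = 4000 };

	printf("%d\n", ingress_buf_rx(&lossy, 9000));	 /* 0: lossy never pauses */
	printf("%d\n", ingress_buf_rx(&lossless, 3000)); /* 0: below threshold */
	printf("%d\n", ingress_buf_rx(&lossless, 3000)); /* 1: crossed threshold */
	return 0;
}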

This is currently configured using dcbnl, but it lacks a software model,
which means that packets forwarded by the kernel don't get the same
treatment (e.g., the skb priority isn't set). It also means that when you
want to limit the number of packets that are queued *from* a certain port
and ingress buffer, you resort to tools such as devlink-sb that end up
colliding with existing tools (tc).

I was thinking (not too much...) about modelling the above using ingress
qdiscs. They wouldn't do any queueing, but rather accounting: once the
egress qdisc dequeues the packet, you give credit back to the ingress qdisc
the packet came from. I believe that modelling these buffers using the
qdisc layer is the right abstraction.
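
In other words, something along these lines, as a toy userspace model of the
accounting only (the real thing would obviously live in the qdisc layer; all
names are invented):

#include <stdbool.h>
#include <stdio.h>

/* Per ingress (port, buffer) accounting entity - no queueing, only credit. */
struct ingress_acct {
	unsigned int credit_bytes;	/* bytes that may still be in flight */
};

/* Ingress: charge the packet against the buffer it was classified into. */
static bool ingress_charge(struct ingress_acct *in, unsigned int pkt_len)
{
	if (pkt_len > in->credit_bytes)
		return false;		/* over quota for this ingress buffer */
	in->credit_bytes -= pkt_len;
	return true;
}

/*
 * Egress: when the egress qdisc dequeues (transmits) the packet, give the
 * credit back to the ingress entity the packet came from.
 */
static void egress_dequeue(struct ingress_acct *in, unsigned int pkt_len)
{
	in->credit_bytes += pkt_len;
}

int main(void)
{
	struct ingress_acct in = { .credit_bytes = 3000 };

	printf("%d\n", ingress_charge(&in, 2000));	/* 1: accepted */
	printf("%d\n", ingress_charge(&in, 2000));	/* 0: no credit left */
	egress_dequeue(&in, 2000);			/* packet left the box */
	printf("%d\n", ingress_charge(&in, 2000));	/* 1: credit returned */
	return 0;
}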

Would appreciate hearing your thoughts on the above.
