Date:   Mon, 29 Jun 2020 07:25:53 -0700
From:   Tom Herbert <tom@...bertland.com>
To:     Saeed Mahameed <saeedm@...lanox.com>
Cc:     Boris Pismenny <borisp@...lanox.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "kuba@...nel.org" <kuba@...nel.org>,
        Tariq Toukan <tariqt@...lanox.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [net-next 04/15] net/mlx5e: Receive flow steering framework for
 accelerated TCP flows

On Sun, Jun 28, 2020 at 11:57 PM Saeed Mahameed <saeedm@...lanox.com> wrote:
>
> On Sat, 2020-06-27 at 15:34 -0700, Tom Herbert wrote:
> > On Sat, Jun 27, 2020 at 2:19 PM Saeed Mahameed <saeedm@...lanox.com>
> > wrote:
> > > From: Boris Pismenny <borisp@...lanox.com>
> > >
> > > The framework allows creating flow tables to steer incoming traffic
> > > of
> > > TCP sockets to the acceleration TIRs.
> > > This is used in downstream patches for TLS, and will be used in the
> > > future for other offloads.
> > >
> > > Signed-off-by: Boris Pismenny <borisp@...lanox.com>
> > > Signed-off-by: Tariq Toukan <tariqt@...lanox.com>
> > > Signed-off-by: Saeed Mahameed <saeedm@...lanox.com>
> > > ---
> > >  .../net/ethernet/mellanox/mlx5/core/Makefile  |   2 +-
> > >  .../net/ethernet/mellanox/mlx5/core/en/fs.h   |  10 +
> > >  .../mellanox/mlx5/core/en_accel/fs_tcp.c      | 280 ++++++++++++++++++
> > >  .../mellanox/mlx5/core/en_accel/fs_tcp.h      |  18 ++
> > >  .../net/ethernet/mellanox/mlx5/core/fs_core.c |   4 +-
> >
> > Saeed,
> >
> > What is the relationship between this and RFS, accelerated RFS, and
> > now PTQ? Is this something that we can generalize in the stack and
>
> Hi Tom,
>
> This is very similar to our internal aRFS HW tables implementation,
> but it is only meant for stateful TCP acceleration filtering and
> processing, mainly for TLS encrypt/decrypt in downstream patches and
> NVMe acceleration in a future submission.
>

Saeed,

Receive Flow Steering is a specific kernel stack functionality that
has been in the kernel for over ten years, and accelerated Receive
Flow Steering is the hardware acceleration variant that has been in
the kernel almost as long (see scaling.txt). If these patches don't
leverage or extend RFS, then please call this something else to avoid
confusion.
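
For reference, the existing aRFS driver hook is ndo_rx_flow_steer in
include/linux/netdevice.h. Its signature is roughly:

    int (*ndo_rx_flow_steer)(struct net_device *dev,
                             const struct sk_buff *skb,
                             u16 rxq_index, u32 flow_id);

The stack calls it per flow, and the driver returns a filter id (or a
negative errno) that it can later check with rps_may_expire_flow() to
age out stale filters.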

> What this mlx5 framework does for now is add a TCP steering filter
> in the HW and attach an action to it (for now RX TLS decrypt), then
> forward to a regular RSS RX queue. This is similar to aRFS, where we
> add a 5-tuple filter in the HW and the action is to forward to a
> specific CPU RX queue instead of the default RSS table.
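
(To make the comparison concrete, here is a minimal sketch of that
match-plus-action model; these names are purely illustrative and do
not exist in the kernel or in this patch set:

    /* Illustrative only: one 5-tuple match, one bound action. */
    struct tcp_accel_filter {
            __be32 saddr, daddr;    /* IPv4 source/destination */
            __be16 sport, dport;    /* TCP ports */
    };

    enum tcp_accel_action {
            ACCEL_FWD_TO_QUEUE,     /* aRFS: steer to a CPU's RX queue */
            ACCEL_TLS_RX_DECRYPT,   /* this series: decrypt, then RSS */
    };

aRFS and this TCP table would then differ only in the action bound to
the filter.)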
>
> For PTQ I am not really sure; I felt a bit confused when I read the
> doc and couldn't really see how PTQ creates/asks for dedicated
> hardware queues/filters. I will try to go through the patches
> tomorrow.
>
> > support in the driver/device with a simple interface like we do with
> > aRFS and ndo_rx_flow_steer?
> >
>
> Currently, just like the aRFS HW tables, which are programmed via
> ndo_rx_flow_steer, this TCP flow table is programmed via
> netdev->tlsdev_ops->tls_dev_add/del() for TLS sockets to be
> offloaded to HW.
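
(For context, that hook lives in struct tlsdev_ops in
include/net/tls.h; the add callback looks approximately like this,
though exact details may differ by kernel version:

    int (*tls_dev_add)(struct net_device *netdev, struct sock *sk,
                       enum tls_offload_ctx_dir direction,
                       struct tls_crypto_info *crypto_info,
                       u32 start_offload_tcp_sn);

The driver learns the socket, the direction, the crypto state, and
the TCP sequence number at which offload starts: enough to derive the
5-tuple filter described above.)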
>
> as implemented in:
> [net-next 08/15] net/mlx5e: kTLS, Add kTLS RX HW offload support
>
> But yes, the HW filter is always similar; only the actions are
> different (encrypt or forward to a specific CPU).
>
> So maybe a unified generic ndo can work for TLS, aRFS, PTQ, XSK,
> Intel's ADQ, and maybe more. It would also make it easier to
> introduce more flow-based offloads (flows that do not belong to the
> TC layer) such as NVMe zero copy.
>
That's an admirable goal, but I don't see how these patches steer
towards it. The patch set is over 1600 LOC, nearly all of which is in
Mellanox driver code. Can some proportion of this code be generalized
and moved into the stack as common code that other drivers can use,
instead of each driver that wants to support advanced offloads having
to recreate it?
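
To sketch what I mean (names hypothetical, nothing like this exists
today): a single generic hook taking a flow plus an action could
subsume both ndo_rx_flow_steer and the TLS-specific path:

    /* Hypothetical illustration, not a concrete API proposal. */
    struct flow_offload_rule {
            struct sock *sk;        /* the flow, identified by socket */
            int action;             /* steer-to-queue, TLS decrypt, ... */
            u32 action_arg;         /* queue index, context id, ... */
    };

    int (*ndo_flow_offload)(struct net_device *dev,
                            const struct flow_offload_rule *rule);

The per-driver work would then shrink to translating one rule into
hardware filters, and the table management could live in common code.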

Tom

> There have been lots of talks and discussions by Magnus, Jesper,
> Bjorn, Maxim, and many others about improving netdev queue
> management and making networking queues a "first-class kernel
> citizen". I believe flow-based filters should be part of that
> effort, and I think you already address some of this in your PTQ
> series.
>
> - Saeed.
>
