Date:   Thu, 26 Jan 2023 09:47:30 +0000
From:   Shai Malin <smalin@...dia.com>
To:     Paolo Abeni <pabeni@...hat.com>,
        Aurelien Aptel <aaptel@...dia.com>,
        "linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "sagi@...mberg.me" <sagi@...mberg.me>, "hch@....de" <hch@....de>,
        "kbusch@...nel.org" <kbusch@...nel.org>,
        "axboe@...com" <axboe@...com>,
        Chaitanya Kulkarni <chaitanyak@...dia.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "kuba@...nel.org" <kuba@...nel.org>
CC:     Boris Pismenny <borisp@...dia.com>,
        "aurelien.aptel@...il.com" <aurelien.aptel@...il.com>,
        "malin1024@...il.com" <malin1024@...il.com>,
        Or Gerlitz <ogerlitz@...dia.com>,
        Yoray Zack <yorayz@...dia.com>
Subject: RE: [PATCH v9 01/25] net: Introduce direct data placement tcp offload


On Fri, 20 Jan 2023 at 10:52, Paolo Abeni wrote:
> On Tue, 2023-01-17 at 17:35 +0200, Aurelien Aptel wrote:
> > From: Boris Pismenny <borisp@...dia.com>
> > diff --git a/include/net/inet_connection_sock.h
> b/include/net/inet_connection_sock.h
> > index c2b15f7e5516..2ba73167b3bb 100644
> > --- a/include/net/inet_connection_sock.h
> > +++ b/include/net/inet_connection_sock.h
> > @@ -68,6 +68,8 @@ struct inet_connection_sock_af_ops {
> >   * @icsk_ulp_ops        Pluggable ULP control hook
> >   * @icsk_ulp_data       ULP private data
> >   * @icsk_clean_acked    Clean acked data hook
> > + * @icsk_ulp_ddp_ops    Pluggable ULP direct data placement control
> hook
> > + * @icsk_ulp_ddp_data           ULP direct data placement private data
> >   * @icsk_ca_state:      Congestion control state
> >   * @icsk_retransmits:           Number of unrecovered [RTO] timeouts
> >   * @icsk_pending:       Scheduled timer event
> > @@ -98,6 +100,8 @@ struct inet_connection_sock {
> >       const struct tcp_ulp_ops  *icsk_ulp_ops;
> >       void __rcu                *icsk_ulp_data;
> >       void (*icsk_clean_acked)(struct sock *sk, u32 acked_seq);
> > +     const struct ulp_ddp_ulp_ops  *icsk_ulp_ddp_ops;
> > +     void __rcu                *icsk_ulp_ddp_data;
> 
> The above probably need a
> 
> #if IS_ENABLED(CONFIG_ULP_DDP)
> 
> compiler guard.

Thanks, will be added.
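
For clarity, it would look roughly like this (same fields as in the hunk
above, wrapped in the guard you suggested):

#if IS_ENABLED(CONFIG_ULP_DDP)
	const struct ulp_ddp_ulp_ops  *icsk_ulp_ddp_ops;
	void __rcu                    *icsk_ulp_ddp_data;
#endif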

> 
> Have you considered avoiding adding the above fields here, and instead
> pass them as argument for the setup() H/W offload operation?

After researching the implications of such a change, we don't believe it is
the right approach. This entire work was designed around the sock structure,
and the same approach will also be needed for the next part of our work (Tx),
in which we will use the ops and the queue from the socket as well.

We defined ULP_DDP as a generic layer that can support different
vendors/devices and different ULPs, so, from our point of view, relying on
only one ops would make it more difficult to maintain.

I will also add that we have been addressing review comments for 1.5 years
in order to fine-tune this design, and such a change would reopen its
fundamentals.
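
To illustrate the point, a rough sketch only (the resync_request() hook and
its exact signature are approximated from this patch, so treat them as such):
with the ops kept on the socket, any path that already holds the sk (the Rx
path today, the Tx path later) can reach the DDP hooks directly:

#include <net/inet_connection_sock.h>

/* Sketch only, not code from the series as-is. */
static bool example_ddp_resync(struct sock *sk, u32 seq, u32 flags)
{
	const struct ulp_ddp_ulp_ops *ops;

	ops = inet_csk(sk)->icsk_ulp_ddp_ops;
	return ops && ops->resync_request &&
	       ops->resync_request(sk, seq, flags);
}

Passing the ops only through the setup() call would instead require every
such path to carry the offload context around.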

> 
> I feel like such fields belong more naturally to the DDP offload
> context/queue and currently the icsk DDP ops are only used by the
> offloading driver. Additionally it looks strange to me 2 consecutive
> different set of ULPs inside the same object (sock).

This is by design: both icsk_ulp_ddp_ops and icsk_ulp_ops are needed in
order to allow a future implementation of NVMe/TCP offload on top of TLS
(and any future combination of a ULP protocol on top of TLS, MPTCP, ...).
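
Roughly, the intended layering on a single socket is (illustrative only):

    NVMe/TCP (DDP consumer)  ->  icsk_ulp_ddp_ops / icsk_ulp_ddp_data
    TLS ULP (kTLS)           ->  icsk_ulp_ops     / icsk_ulp_data
    TCP (struct sock / inet_connection_sock)

so each layer keeps its own ops and private data without stepping on the
other.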

> 
> Thanks,
> 
> Paolo
