Message-ID: <f62c517e-e25e-ad2f-cf31-cba6639735ad@grimberg.me>
Date: Thu, 27 Oct 2022 13:35:48 +0300
From: Sagi Grimberg <sagi@...mberg.me>
To: Aurelien Aptel <aaptel@...dia.com>, netdev@...r.kernel.org,
davem@...emloft.net, kuba@...nel.org, edumazet@...gle.com,
pabeni@...hat.com, saeedm@...dia.com, tariqt@...dia.com,
leon@...nel.org, linux-nvme@...ts.infradead.org, hch@....de,
kbusch@...nel.org, axboe@...com, chaitanyak@...dia.com
Cc: smalin@...dia.com, ogerlitz@...dia.com, yorayz@...dia.com,
borisp@...dia.com, aurelien.aptel@...il.com, malin1024@...il.com
Subject: Re: [PATCH v7 00/23] nvme-tcp receive offloads
> Hi,
>
> The nvme-tcp receive offloads series v7 was sent to both net-next and
> nvme. It is the continuation of v5, which was sent in July 2021
> https://lore.kernel.org/netdev/20210722110325.371-1-borisp@nvidia.com/ .
> V7 is now working on real HW.
>
> The feature will also be presented in netdev this week
> https://netdevconf.info/0x16/session.html?NVMeTCP-Offload-%E2%80%93-Implementation-and-Performance-Gains
>
> Currently the series is aligned to net-next, please update us if you will prefer otherwise.
>
> Thanks,
> Shai, Aurelien
Hey Shai & Aurelien,
Can you please, in the next version, add documentation of the
limitations this offload has in terms of compatibility? For example
(from my own imagination):
1. bonding/teaming/other-stacking?
2. TLS (sw/hw)?
3. any sort of tunneling/overlay?
4. VF/PF?
5. any nvme features?
6. ...
And what are your plans to address each, if at all?
Also, does this have a path to userspace? For example, almost all
of the nvme-tcp targets live in userspace.
I don't think I see any limits in the code, such as the maximum
number of connections that can be offloaded on a single device/port.
Can you share some details on this?
Thanks.