Message-ID: <YL84VamVh78Ds2Eg@alley>
Date: Tue, 8 Jun 2021 11:28:53 +0200
From: Petr Mladek <pmladek@...e.com>
To: Shai Malin <smalin@...vell.com>
Cc: netdev@...r.kernel.org, linux-nvme@...ts.infradead.org,
davem@...emloft.net, kuba@...nel.org, sagi@...mberg.me, hch@....de,
axboe@...com, kbusch@...nel.org, aelior@...vell.com,
mkalderon@...vell.com, okulkarni@...vell.com,
pkushwaha@...vell.com, malin1024@...il.com,
Dean Balandin <dbalandin@...vell.com>
Subject: Re: [RFC PATCH v5 01/27] nvme-tcp-offload: Add nvme-tcp-offload -
NVMeTCP HW offload ULP
On Wed 2021-05-19 14:13:14, Shai Malin wrote:
> This patch presents the structure for the NVMeTCP offload common
> layer driver. This module is added under "drivers/nvme/host/", and
> future offload drivers that register to it will be placed under
> "drivers/nvme/hw".
> This new driver will be enabled by the Kconfig "NVM Express over Fabrics
> TCP offload common layer".
> In order to support the new transport type, no change is needed
> for host mode.
>
> Each new vendor-specific offload driver will register to this ULP during
> its probe function, by filling out the nvme_tcp_ofld_dev->ops and
> nvme_tcp_ofld_dev->private_data and calling nvme_tcp_ofld_register_dev
> with the initialized struct.
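
For illustration, I read the registration described above roughly as
follows (a sketch only: ->ops, ->private_data and
nvme_tcp_ofld_register_dev() come from the description; the my_* names
and the ops member shown are made up):

	/* Sketch: ops member names are illustrative, not from the patch. */
	static struct nvme_tcp_ofld_ops my_ofld_ops = {
		.name = "my_vendor",
		/* ... queue/request callbacks filled in by the vendor ... */
	};

	static int my_vendor_probe(struct my_adapter *adapter)
	{
		struct nvme_tcp_ofld_dev *dev = &adapter->ofld_dev;

		/* Hand the common layer our callbacks and private context. */
		dev->ops = &my_ofld_ops;
		dev->private_data = adapter;

		/* Register the device with the nvme-tcp-offload ULP. */
		return nvme_tcp_ofld_register_dev(dev);
	}
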
>
> The internal implementation:
> - tcp-offload.h:
> Includes all common structs and ops to be used and shared by offload
> drivers.
>
> - tcp-offload.c:
> Includes the init function which registers as an NVMf transport just
> like any other transport.
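
For reference, I read this as the usual NVMf transport registration
pattern, roughly like below. Only nvmf_register_transport() and the
nvmf_transport_ops layout are the standard fabrics API; the transport
name and the create_ctrl callback name are my guesses:

	static struct nvmf_transport_ops nvme_tcp_ofld_transport = {
		.name		= "tcp_offload",
		.module		= THIS_MODULE,
		.required_opts	= NVMF_OPT_TRADDR,
		.create_ctrl	= nvme_tcp_ofld_create_ctrl,
	};

	static int __init nvme_tcp_ofld_init_module(void)
	{
		/* Register like any other NVMf transport (rdma, tcp, fc). */
		return nvmf_register_transport(&nvme_tcp_ofld_transport);
	}
	module_init(nvme_tcp_ofld_init_module);
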
>
> Acked-by: Igor Russkikh <irusskikh@...vell.com>
> Signed-off-by: Dean Balandin <dbalandin@...vell.com>
> Signed-off-by: Prabhakar Kushwaha <pkushwaha@...vell.com>
> Signed-off-by: Omkar Kulkarni <okulkarni@...vell.com>
> Signed-off-by: Michal Kalderon <mkalderon@...vell.com>
> Signed-off-by: Ariel Elior <aelior@...vell.com>
> Signed-off-by: Shai Malin <smalin@...vell.com>
> Reviewed-by: Hannes Reinecke <hare@...e.de>
> --- a/drivers/nvme/host/Kconfig
> +++ b/drivers/nvme/host/Kconfig
> @@ -84,3 +84,19 @@ config NVME_TCP
> from https://github.com/linux-nvme/nvme-cli.
>
> If unsure, say N.
> +
> +config NVME_TCP_OFFLOAD
> + tristate "NVM Express over Fabrics TCP offload common layer"
> + default m
Is this intentional, please?
> + depends on INET
> + depends on BLK_DEV_NVME
> + select NVME_FABRICS
> + help
> + This provides support for the NVMe over Fabrics protocol using
> + the TCP offload transport. This allows you to use remote block devices
> + exported using the NVMe protocol set.
> +
> To configure an NVMe over Fabrics controller use the nvme-cli tool
> + from https://github.com/linux-nvme/nvme-cli.
> +
> + If unsure, say N.
I would expect the default to be "n" so that people who are not sure
or do not care about NVMe just take the default. IMHO, that is the
usual behavior.
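
I.e. I would simply drop the "default m" line; a tristate option
without an explicit default is "n" anyway:

	config NVME_TCP_OFFLOAD
		tristate "NVM Express over Fabrics TCP offload common layer"
		depends on INET
		depends on BLK_DEV_NVME
		select NVME_FABRICS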
Best Regards,
Petr