Message-ID: <20160609134603.GA29820@lst.de>
Date: Thu, 9 Jun 2016 15:46:03 +0200
From: Christoph Hellwig <hch@....de>
To: "Nicholas A. Bellinger" <nab@...ux-iscsi.org>
Cc: Sagi Grimberg <sagi@...htbits.io>, Christoph Hellwig <hch@....de>,
axboe@...nel.dk, linux-block@...r.kernel.org,
linux-scsi <linux-scsi@...r.kernel.org>,
linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
keith.busch@...el.com, target-devel <target-devel@...r.kernel.org>
Subject: Re: NVMe over Fabrics target implementation
On Wed, Jun 08, 2016 at 09:36:15PM -0700, Nicholas A. Bellinger wrote:
> The configfs ABI should not dictate a single backend use-case.
And it doesn't. I actually had a file backend implemented to
benchmark it against the loopback driver, and it needed absolutely
zero new configfs interface. If at some point we want different
backends to expose different attributes, we can trivially add them
using configfs_register_group.
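
To give an idea, a backend-private attribute group could look
something like the sketch below.  All the nvmet_file_* names are
made up for illustration; only the configfs calls themselves are
the real API:

#include <linux/configfs.h>

/* hypothetical file-backend attribute, purely for illustration */
static ssize_t nvmet_file_buffered_io_show(struct config_item *item,
					   char *page)
{
	return snprintf(page, PAGE_SIZE, "0\n");
}
CONFIGFS_ATTR_RO(nvmet_file_, buffered_io);

static struct configfs_attribute *nvmet_file_attrs[] = {
	&nvmet_file_attr_buffered_io,
	NULL,
};

static struct config_item_type nvmet_file_type = {
	.ct_attrs	= nvmet_file_attrs,
	.ct_owner	= THIS_MODULE,
};

static struct config_group nvmet_file_group;

/* hang a "file" subdirectory off an existing namespace group */
static int nvmet_file_add_group(struct config_group *ns_group)
{
	config_group_init_type_name(&nvmet_file_group, "file",
				    &nvmet_file_type);
	return configfs_register_group(ns_group, &nvmet_file_group);
}

A different backend would simply register its own group with its
own attributes under the same parent.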
> Along with having common code and existing configfs
> ABI, we also get a proper starting point for target-core
> features that span across endpoints, and are defined for
> both scsi and nvme. PR APTPL immediately comes to mind.
PRs are a useful feature on the roadmap. However, we need a
separate pluggable backend anyway for distributed backends like RBD
or Bart's DLM implementation. The current LIO PR implementation
will also need a lot of work to be usable for NVMe while actually
following the spec in all its details and being power safe. The
right way to go here is a PR API that allows different backends;
the existing LIO one might become one of them after it gets the
needed attention.
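
As for the shape of such an API: roughly along the lines of the
block layer's struct pr_ops in include/linux/pr.h (reproduced from
memory below, so take the exact signatures with a grain of salt),
just lifted to a level where a DLM or RBD implementation can plug
in next to an in-core one:

/* per-backend persistent reservation operations */
struct pr_ops {
	int (*pr_register)(struct block_device *bdev, u64 old_key,
			   u64 new_key, u32 flags);
	int (*pr_reserve)(struct block_device *bdev, u64 key,
			  enum pr_type type, u32 flags);
	int (*pr_release)(struct block_device *bdev, u64 key,
			  enum pr_type type);
	int (*pr_preempt)(struct block_device *bdev, u64 old_key,
			   u64 new_key, enum pr_type type, bool abort);
	int (*pr_clear)(struct block_device *bdev, u64 key);
};

Each backend would supply its own ops table, and the fabric and
SCSI target code would call through it without having to know where
the reservation state actually lives.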