Message-ID: <CABEDWGwayE1aW26zfTqkYUVY-i=bTdM_Vm3htVB-x-AZQNvw2Q@mail.gmail.com>
Date:   Tue, 15 Oct 2019 10:40:28 -0700
From:   Alan Mikhak <alan.mikhak@...ive.com>
To:     Christoph Hellwig <hch@...radead.org>
Cc:     linux-kernel@...r.kernel.org, martin.petersen@...cle.com,
        alexios.zavras@...el.com, ming.lei@...hat.com,
        gregkh@...uxfoundation.org, tglx@...utronix.de,
        Jason Gunthorpe <jgg@...pe.ca>, christophe.leroy@....fr,
        Palmer Dabbelt <palmer@...ive.com>,
        Paul Walmsley <paul.walmsley@...ive.com>,
        Logan Gunthorpe <logang@...tatee.com>
Subject: Re: [PATCH] scatterlist: Validate page before calling PageSlab()

On Tue, Oct 15, 2019 at 2:55 AM Christoph Hellwig <hch@...radead.org> wrote:
>
> On Mon, Oct 07, 2019 at 02:13:51PM -0700, Alan Mikhak wrote:
> > > My goal is to not modify the Linux NVMe target code at all. The NVMe
> > > endpoint function driver currently does the work that is required.
>
> You will have to do some modifications, as for example in PCIe you can
> have a n:1 relationship between SQs and CQs.  And you need to handle
> the Create/Delete SQ/CQ commands, but not the fabrics commands.  And
> modifying subsystems in Linux is perfectly acceptable, that is how they
> improve.

The NVMe endpoint function driver currently creates the admin ACQ and
ASQ on startup. When the NVMe host connects over PCIe, the NVMe endpoint
function driver handles the Create/Delete SQ/CQ commands and any other
commands that cannot go to the NVMe target on behalf of the host. For
example, it creates a pair of I/O CQ and SQ as requested by the Linux
host kernel nvme.ko driver. The NVMe endpoint function driver also
supports a Controller Memory Buffer (CMB); the I/O SQ is therefore
located in the CMB, as requested by the host nvme.ko.
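
Roughly, the admin command dispatch looks something like the sketch
below. The nvmet_epf_* helpers and the nvmet_epf context struct are
placeholder names for illustration only; the nvme_admin_* opcodes and
struct nvme_command come from include/linux/nvme.h.

	#include <linux/nvme.h>

	static void nvmet_epf_handle_admin_cmd(struct nvmet_epf *epf,
						struct nvme_command *cmd)
	{
		switch (cmd->common.opcode) {
		case nvme_admin_create_cq:
			/* Set up the I/O CQ described by the command. */
			nvmet_epf_create_cq(epf, &cmd->create_cq);
			break;
		case nvme_admin_create_sq:
			/* The I/O SQ may be placed in the CMB if the host asks. */
			nvmet_epf_create_sq(epf, &cmd->create_sq);
			break;
		case nvme_admin_delete_cq:
		case nvme_admin_delete_sq:
			nvmet_epf_delete_queue(epf, cmd);
			break;
		default:
			/* Everything else is forwarded to the NVMe target core. */
			nvmet_epf_forward_to_target(epf, cmd);
			break;
		}
	}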

As for the n:1 relationship between SQs and CQs, I have not implemented
that yet since I have not seen such a request from the host nvme.ko. I
agree it needs to be implemented at some point, and it is doable. I
appreciate your comment and have taken note of it.

>
> Do you have a pointer to your code?

The code is still a work in progress. It is not yet stable enough for
reliable use or upstream patch submission, but it is stable enough for
me to see it work from my Debian host desktop and to capture screenshots
of NVMe partition benchmarking, formatting, mounting, and file storage
and retrieval activity, as I mentioned. I could look into submitting an
RFC patch upstream for early review and feedback, but it is not in a
polished state yet.

>
> > > In my current platform, there are no page struct backing for the PCIe
> > > memory address space.
>
> In Linux there aren't struct pages for physical memory remapped using
> ioremap().  But if you want to feed them to the I/O subsystem you have
> to use devm_memremap_pages to create a page backing.  Assuming you are
> on a RISC-V platform given your affiliation you'll need to ensure your
> kernel allows for ZONE_DEVICE pages, which Logan (added to Cc) has been
> working on.  I don't remember what the current status is.

Thanks for this suggestion. I will try using devm_memremap_pages() to
create a page backing. I will also look for Logan's work regarding
ZONE_DEVICE pages.
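
From a first look, I expect the setup to be something along these lines.
This is only a sketch: the fields of struct dev_pagemap differ between
kernel versions (this assumes roughly v5.4), the choice of
MEMORY_DEVICE_PCI_P2PDMA is a guess on my part, and
nvmet_epf_map_bar_pages is a made-up helper name. For an actual PCI BAR,
pci_p2pdma_add_resource() wraps much of this.

	#include <linux/device.h>
	#include <linux/ioport.h>
	#include <linux/memremap.h>

	static void *nvmet_epf_map_bar_pages(struct device *dev,
					     phys_addr_t bar_start,
					     size_t bar_size)
	{
		struct dev_pagemap *pgmap;

		pgmap = devm_kzalloc(dev, sizeof(*pgmap), GFP_KERNEL);
		if (!pgmap)
			return ERR_PTR(-ENOMEM);

		pgmap->res.start = bar_start;
		pgmap->res.end = bar_start + bar_size - 1;
		pgmap->res.flags = IORESOURCE_MEM;
		pgmap->type = MEMORY_DEVICE_PCI_P2PDMA;

		/* Create ZONE_DEVICE page structs for the region and map it. */
		return devm_memremap_pages(dev, pgmap);
	}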

>
> > Please consider the following information and cost estimate in
> > bytes for requiring page structs for PCI memory if used with
> > scatterlists. For example, a 128GB PCI memory address space
> > could require as much as 256MB of system memory just for
> > page struct backing. In a 1GB 64-bit system with flat memory
> > model, that consumes 25% of available memory. However,
> > not all of the 128GB PCI memory may be mapped for use at
> > a given time depending on the application. The cost of PCI
> > page structs is an upfront cost to be paid at system start.
>
> I know the pages are costly.  But once you want to feed them through
> subsystems that do expect pages you'll have to do that.  And anything
> using scatterlists currently does.  A little hack here and there isn't
> going to solve that.

The feedback on this is clear: page struct backing is required in order
to use scatterlists. Thanks.
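
If I understand correctly, once the pages exist the scatterlist side
becomes straightforward, something like the sketch below.
nvmet_epf_sg_from_phys is a made-up helper name; 'paddr' is assumed to
lie inside a range previously registered with devm_memremap_pages(),
and len plus the page offset must fit within one page for this
single-entry case.

	#include <linux/mm.h>
	#include <linux/pfn.h>
	#include <linux/scatterlist.h>

	static int nvmet_epf_sg_from_phys(struct scatterlist *sg,
					  phys_addr_t paddr, unsigned int len)
	{
		unsigned long pfn = PHYS_PFN(paddr);

		/* Only valid if a page struct exists for this address. */
		if (!pfn_valid(pfn))
			return -EINVAL;

		sg_init_table(sg, 1);
		sg_set_page(sg, pfn_to_page(pfn), len, offset_in_page(paddr));
		return 0;
	}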
