Date:   Tue, 15 Oct 2019 11:45:00 -0600
From:   Logan Gunthorpe <logang@...tatee.com>
To:     Alan Mikhak <alan.mikhak@...ive.com>,
        Christoph Hellwig <hch@...radead.org>
Cc:     linux-kernel@...r.kernel.org, martin.petersen@...cle.com,
        alexios.zavras@...el.com, ming.lei@...hat.com,
        gregkh@...uxfoundation.org, tglx@...utronix.de,
        Jason Gunthorpe <jgg@...pe.ca>, christophe.leroy@....fr,
        Palmer Dabbelt <palmer@...ive.com>,
        Paul Walmsley <paul.walmsley@...ive.com>
Subject: Re: [PATCH] scatterlist: Validate page before calling PageSlab()



On 2019-10-15 11:40 a.m., Alan Mikhak wrote:
> On Tue, Oct 15, 2019 at 2:55 AM Christoph Hellwig <hch@...radead.org> wrote:
>>
>> On Mon, Oct 07, 2019 at 02:13:51PM -0700, Alan Mikhak wrote:
>>>> My goal is to not modify the Linux NVMe target code at all. The NVMe
>>>> endpoint function driver currently does the work that is required.
>>
>> You will have to do some modifications, as for example in PCIe you can
>> have an n:1 relationship between SQs and CQs.  And you need to handle
>> the Create/Delete SQ/CQ commands, but not the fabrics commands.  And
>> modifying subsystems in Linux is perfectly acceptable; that is how they
>> improve.
> 
> The NVMe endpoint function driver currently creates the admin ACQ and
> ASQ on startup. When the NVMe host connects over PCIe, the NVMe endpoint
> function driver handles the Create/Delete SQ/CQ commands and any other
> commands that cannot go to the NVMe target on behalf of the host. For
> example, it creates a pair of I/O CQ and SQ as requested by the Linux
> host kernel nvme.ko driver. The NVMe endpoint function driver also
> supports a Controller Memory Buffer (CMB); the I/O SQ is therefore
> located in the CMB, as requested by the host nvme.ko.
> 
> As for the n:1 relationship between SQs and CQs, I have not implemented
> that yet, since I have not yet received such a request from the host
> nvme.ko. I agree it needs to be implemented at some point, and it is
> doable. I appreciate your comment and have taken note of it.
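
For illustration only, one way such a split could look is sketched below:
queue-management admin commands are handled locally in the endpoint
function driver, and everything else is passed through to the NVMe target.
The opcode names and struct nvme_command fields are the real ones from
include/linux/nvme.h; "struct nvme_ep" and the nvme_ep_*() helpers are
hypothetical and not part of any existing driver.

	/*
	 * Hypothetical admin-command dispatch in an NVMe endpoint
	 * function driver.  Only the opcodes and command layout come
	 * from <linux/nvme.h>; the helpers are placeholders.
	 */
	static int nvme_ep_handle_admin_cmd(struct nvme_ep *ep,
					    struct nvme_command *cmd)
	{
		switch (cmd->common.opcode) {
		case nvme_admin_create_sq:
			/* the SQ may be placed in the CMB, per the host's request */
			return nvme_ep_create_sq(ep, &cmd->create_sq);
		case nvme_admin_create_cq:
			return nvme_ep_create_cq(ep, &cmd->create_cq);
		case nvme_admin_delete_sq:
			return nvme_ep_delete_sq(ep,
					le16_to_cpu(cmd->delete_queue.qid));
		case nvme_admin_delete_cq:
			return nvme_ep_delete_cq(ep,
					le16_to_cpu(cmd->delete_queue.qid));
		default:
			/* anything else is forwarded to the Linux NVMe target */
			return nvme_ep_forward_to_target(ep, cmd);
		}
	}
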
> 
>>
>> Do you have a pointer to your code?
> 
> The code is still a work in progress. It is not yet stable enough for
> reliable use or upstream patch submission, but it is stable enough for me
> to see it work from my Debian host desktop and to capture screenshots of
> NVMe partition benchmarking, formatting, mounting, and file storage and
> retrieval activity, as I mentioned. I could look into submitting an RFC
> patch upstream for early review and feedback, but it is not in a polished
> state yet.
> 
>>
>>>> On my current platform, there is no struct page backing for the PCIe
>>>> memory address space.
>>
>> In Linux there aren't struct pages for physical memory remapped using
>> ioremap().  But if you want to feed that memory to the I/O subsystem you
>> have to use devm_memremap_pages() to create a page backing.  Assuming
>> you are on a RISC-V platform, given your affiliation, you'll need to
>> ensure your kernel allows for ZONE_DEVICE pages, which Logan (added to
>> Cc) has been working on.  I don't remember what the current status is.
> 
> Thanks for this suggestion. I will try using devm_memremap_pages() to
> create a page backing. I will also look for Logan's work regarding
> ZONE_DEVICE pages.

The nvme driver already creates struct pages for the CMB with
devm_memremap_pages(), and has done so since at least v4.20. It probably
won't do anything with the CMB on platforms that don't yet support
ZONE_DEVICE (i.e. riscv), though.
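
For reference, the devm_memremap_pages() pattern being discussed looks
roughly like the sketch below. It follows the ~v5.4 struct dev_pagemap
layout (later kernels replace the "res" resource with a "range"/"nr_range"
pair); the function name and the bar_phys/bar_size parameters are
placeholders, and refcount/cleanup details are left out.

	/*
	 * Sketch only: give a PCIe memory window a struct page backing
	 * so it can be fed to the I/O subsystem.  Requires ZONE_DEVICE.
	 */
	static int ep_map_window(struct device *dev, phys_addr_t bar_phys,
				 size_t bar_size)
	{
		struct dev_pagemap *pgmap;
		void *vaddr;

		pgmap = devm_kzalloc(dev, sizeof(*pgmap), GFP_KERNEL);
		if (!pgmap)
			return -ENOMEM;

		pgmap->type = MEMORY_DEVICE_PCI_P2PDMA;	/* or another memory_type */
		pgmap->res.start = bar_phys;		/* physical base of the window */
		pgmap->res.end = bar_phys + bar_size - 1;
		pgmap->res.flags = IORESOURCE_MEM;

		vaddr = devm_memremap_pages(dev, pgmap);
		if (IS_ERR(vaddr))
			return PTR_ERR(vaddr);

		/* virt_to_page(vaddr) now yields a valid ZONE_DEVICE struct page */
		return 0;
	}
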

Logan
