Message-ID: <4fdd337b-fa35-a909-5eee-823bfd1e9dc4@deltatee.com>
Date: Fri, 1 Oct 2021 11:01:49 -0600
From: Logan Gunthorpe <logang@...tatee.com>
To: Jason Gunthorpe <jgg@...pe.ca>,
Alistair Popple <apopple@...dia.com>,
Felix Kuehling <Felix.Kuehling@....com>,
Christoph Hellwig <hch@....de>,
Dan Williams <dan.j.williams@...el.com>
Cc: linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
linux-block@...r.kernel.org, linux-pci@...r.kernel.org,
linux-mm@...ck.org, iommu@...ts.linux-foundation.org,
Stephen Bates <sbates@...thlin.com>,
Christian König <christian.koenig@....com>,
John Hubbard <jhubbard@...dia.com>,
Don Dutile <ddutile@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
Daniel Vetter <daniel.vetter@...ll.ch>,
Jakowski Andrzej <andrzej.jakowski@...el.com>,
Minturn Dave B <dave.b.minturn@...el.com>,
Jason Ekstrand <jason@...kstrand.net>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Xiong Jianxin <jianxin.xiong@...el.com>,
Bjorn Helgaas <helgaas@...nel.org>,
Ira Weiny <ira.weiny@...el.com>,
Robin Murphy <robin.murphy@....com>,
Martin Oliveira <martin.oliveira@...eticom.com>,
Chaitanya Kulkarni <ckulkarnilinux@...il.com>
Subject: Re: [PATCH v3 19/20] PCI/P2PDMA: introduce pci_mmap_p2pmem()
On 2021-10-01 7:48 a.m., Jason Gunthorpe wrote:
> On Wed, Sep 29, 2021 at 09:36:52PM -0300, Jason Gunthorpe wrote:
>
>> Why would DAX want to do this in the first place?? This means the
>> address space zap is much more important than just speeding up
>> destruction, it is essential for correctness since the PTEs are not
>> holding refcounts naturally...
>
> It is not really for this series to fix, but I think the whole thing
> is probably racy once you start allowing pte_special pages to be
> accessed by GUP.
>
> If we look at unmapping the PTE relative to GUP fast the important
> sequence is how the TLB flushing doesn't decrement the page refcount
> until after it knows any concurrent GUP fast is completed. This is
> arch specific, eg it could be done async through a call_rcu handler.
>
> This ensures that pages can't cross back into the free pool and be
> reallocated until we know for certain that nobody is walking the PTEs
> and could potentially take an additional reference on it. The scheme
> cannot rely on the page refcount being 0 because once it goes into the
> free pool it could be immediately reallocated back to a non-zero
> refcount.
>
> A DAX user that simply does an address space invalidation doesn't
> sequence itself with any of this mechanism. So we can race with a
> thread doing GUP fast and another thread re-cycling the page into
> another use - creating a leakage of the page from one security context
> to another.
>
> This seems to be made worse for the pgmap stuff due to the wonky
> refcount usage - at least if the refcount had dropped to zero gup fast
> would be blocked for a time, but even that doesn't happen.
>
> In short, I think using pg special for anything that can be returned
> by gup fast (and maybe even gup!) is racy/wrong. We must have the
> normal refcount mechanism work for correctness of the recycling flow.
I'm not quite following all of this. I'm not entirely sure how fs/dax
works in this regard, but for device-dax (and similarly p2pdma) it
doesn't seem as bad as you say.
In device-dax, the refcount is only used to prevent the device, and
therefore the pages, from going away on device unbind. Pages cannot be
recycled in the way you describe, as they are mapped linearly within the
device. The
address space invalidation is done only when the device is unbound.
Before the invalidation, an active flag is cleared to ensure no new
mappings can be created while the unmap is proceeding.
unmap_mapping_range() should sequence itself with the TLB flush and
GUP-fast using the same mechanism it does for regular pages. As far as I
can see, by the time unmap_mapping_range() returns, we should be
confident that there are no pages left in any mapping (given that no new
pages could have been added since before the call). Then, before
finishing the unbind, device-dax decrements the refcount of all pages
and waits for the refcount of all pages to go to zero. Thus, any pages
that were successfully got with GUP, during or before
unmap_mapping_range(), should hold a reference, and once all those
references are returned, unbind can finish.
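To put that ordering in one place, here is a very rough sketch of the
unbind path I'm describing (struct dax_stub, the alive flag and
dax_stub_refs_gone() are made-up names for illustration; this is not the
actual drivers/dax code):

/* assumes <linux/mm.h>, <linux/fs.h>, <linux/wait.h> */
struct dax_stub {
	bool alive;			/* checked by mmap()/fault paths */
	struct inode *inode;
	unsigned long first_pfn, last_pfn;
	wait_queue_head_t ref_wq;
};

static void dax_stub_unbind(struct dax_stub *dax)
{
	unsigned long pfn;

	/* 1) Clear the active flag so no new mappings can be created. */
	WRITE_ONCE(dax->alive, false);

	/*
	 * 2) Zap every userspace PTE for the whole range. This sequences
	 *    against GUP-fast the same way it does for regular pages.
	 */
	unmap_mapping_range(dax->inode->i_mapping, 0, 0, 1);

	/* 3) Drop the driver's reference on each page... */
	for (pfn = dax->first_pfn; pfn <= dax->last_pfn; pfn++)
		put_page(pfn_to_page(pfn));

	/*
	 * 4) ...and wait until any references taken by gup/gup_fast
	 *    before or during the zap have been put as well.
	 */
	wait_event(dax->ref_wq, dax_stub_refs_gone(dax));
}

The wakeup in step 4 would come from wherever the last reference is put
(e.g. the pgmap page-free path).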
P2PDMA follows this pattern, except pages are not mapped linearly and
are returned to the genalloc pool when their refcount falls to 1. This
only happens after a VMA is closed, which should imply the PTEs have
already been unlinked from the pages. The same situation occurs on
unbind, with a flag preventing new mappings from being created before
unmap_mapping_range(), etc.
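For reference, that lifecycle has roughly the following shape (a sketch
only; p2pdma_pool_for_page() is a made-up lookup helper and this is not
the exact code from this series):

/* assumes <linux/memremap.h>, <linux/genalloc.h>, <linux/mm.h> */
static void p2pdma_page_free(struct page *page)
{
	/* Hypothetical helper: find the owning device's genalloc pool. */
	struct gen_pool *pool = p2pdma_pool_for_page(page);

	/*
	 * All user references are gone (and the VMA close has already
	 * unlinked the PTEs), so the page can be handed out again.
	 */
	gen_pool_free(pool, (unsigned long)page_to_virt(page), PAGE_SIZE);
}

static const struct dev_pagemap_ops p2pdma_pgmap_ops = {
	.page_free = p2pdma_page_free,
};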
That's not to say all of this couldn't use a big conceptual cleanup. A
similar question exists with the single find_special_page() user
(xen/gntdev): it's definitely not clear what the differences are between
the find_special_page() and vmf_insert_mixed() techniques, or when one
should be used over the other. Could they both be merged to use the same
technique?
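For anyone who hasn't looked at the two, they boil down to roughly the
following from a driver's point of view (a sketch with made-up foo_*
names and an assumed VM_MIXEDMAP VMA; not the actual gntdev or
device-dax code):

/* assumes <linux/mm.h>, <linux/pfn_t.h> */

/* (a) Install a special PTE from a pfn_t in the fault handler. */
static vm_fault_t foo_fault(struct vm_fault *vmf)
{
	/* foo_base_phys() is a hypothetical helper for the device base. */
	phys_addr_t phys = foo_base_phys(vmf->vma) +
			   ((u64)vmf->pgoff << PAGE_SHIFT);

	return vmf_insert_mixed(vmf->vma, vmf->address,
				phys_to_pfn_t(phys, PFN_DEV | PFN_MAP));
}

/*
 * (b) Provide a hook that hands back the struct page for a special
 *     PTE, which is what xen/gntdev does today.
 */
static struct page *foo_find_special_page(struct vm_area_struct *vma,
					  unsigned long addr)
{
	return foo_lookup_page(vma, addr); /* hypothetical lookup */
}

static const struct vm_operations_struct foo_vm_ops = {
	.fault             = foo_fault,
	.find_special_page = foo_find_special_page,
};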
Logan