Message-ID: <20190816123036.GD5412@mellanox.com>
Date: Fri, 16 Aug 2019 12:30:41 +0000
From: Jason Gunthorpe <jgg@...lanox.com>
To: Christoph Hellwig <hch@....de>
CC: Jerome Glisse <jglisse@...hat.com>,
Dan Williams <dan.j.williams@...el.com>,
Ben Skeggs <bskeggs@...hat.com>,
Felix Kuehling <Felix.Kuehling@....com>,
Ralph Campbell <rcampbell@...dia.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"nouveau@...ts.freedesktop.org" <nouveau@...ts.freedesktop.org>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"amd-gfx@...ts.freedesktop.org" <amd-gfx@...ts.freedesktop.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
On Fri, Aug 16, 2019 at 06:44:48AM +0200, Christoph Hellwig wrote:
> On Fri, Aug 16, 2019 at 12:43:07AM +0000, Jason Gunthorpe wrote:
> > On Thu, Aug 15, 2019 at 04:51:33PM -0400, Jerome Glisse wrote:
> >
> > > struct page. In this case, anyway, we can update
> > > nouveau_dmem_page() to check that page->pgmap == the
> > > expected pgmap.
> >
> > I was also wondering if that is a problem.. just blindly doing a
> > container_of on the page->pgmap does seem like it assumes that only
> > this driver is using DEVICE_PRIVATE.
> >
> > It seems like something is missing in hmm_range_fault; it should be
> > told which DEVICE_PRIVATE pages are acceptable to trigger
> > HMM_PFN_DEVICE_PRIVATE and fault all others?
>
> The whole device private handling in hmm and migrate_vma seems pretty
> broken as far as I can tell, and I have some WIP patches. Basically we
> should not touch (or possibly, in the future, migrate to ram) device
> private pages not owned by the caller, where I try to define the
> caller by the dev_pagemap_ops instance.
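
For concreteness, the pattern at issue looks roughly like this (a
sketch with hypothetical names, loosely modeled on what a helper like
nouveau_dmem_page() does; not the actual nouveau code):

#include <linux/mm.h>
#include <linux/memremap.h>

struct my_dmem {
	struct dev_pagemap pagemap;
	/* driver-private state ... */
};

static struct my_dmem *my_page_to_dmem(struct page *page)
{
	/*
	 * is_device_private_page() only proves this is *some*
	 * DEVICE_PRIVATE page; if another driver (or another
	 * device) created the pgmap, this container_of() resolves
	 * into someone else's structure.
	 */
	if (!is_device_private_page(page))
		return NULL;
	return container_of(page->pgmap, struct my_dmem, pagemap);
}
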
I think the ownership test needs to be more elaborate.

For instance, a system may have multiple DEVICE_PRIVATE maps owned by
the same driver - but multiple physical devices using that driver.

Each physical device's driver should only ever get DEVICE_PRIVATE
pages for its own on-device memory, never a DEVICE_PRIVATE for
another device's memory.

The dev_pagemap_ops would not be unique enough, right?
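
A per-device ownership check would have to compare pgmap identity
rather than the ops; something like this (hypothetical names again):

/*
 * Two devices bound to the same driver share the same
 * dev_pagemap_ops, but each has its own struct dev_pagemap, so
 * pointer equality against the device's own pagemap is the only
 * test that actually identifies the owner.
 */
static bool my_dev_owns_page(struct my_dmem *dmem, struct page *page)
{
	return is_device_private_page(page) &&
	       page->pgmap == &dmem->pagemap;
}
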
Probably clusters of same-driver struct device instances can also
share a DEVICE_PRIVATE; at least high-end GPUs now have private
memory coherency buses between their devices.

Since we want to trigger migration to CPU memory on incompatible
DEVICE_PRIVATE pages, it seems best to sort this out in
hmm_range_fault?

Maybe some sort of unique ID inside the page->pgmap and passed as
input?
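
Something like this, perhaps (entirely hypothetical field and
function names, just to make the shape concrete):

/* new field in struct dev_pagemap, set by the creating driver: */
	void *owner;

/* new field in struct hmm_range, set by the caller: */
	void *dev_private_owner;

/* and in the pte walker, roughly: */
static bool hmm_usable_device_private(const struct hmm_range *range,
				      const struct page *page)
{
	/*
	 * Only DEVICE_PRIVATE pages whose pgmap carries the caller's
	 * cookie become HMM_PFN_DEVICE_PRIVATE; pages with any other
	 * owner get faulted (and eventually migrated back to ram).
	 */
	return page->pgmap->owner == range->dev_private_owner;
}
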
Jason