Message-ID: <20240716201103.GE1482543@nvidia.com>
Date: Tue, 16 Jul 2024 17:11:03 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Ackerley Tng <ackerleytng@...gle.com>, quic_eberman@...cinc.com,
akpm@...ux-foundation.org, david@...hat.com, kvm@...r.kernel.org,
linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org, linux-mm@...ck.org, maz@...nel.org,
pbonzini@...hat.com, shuah@...nel.org, tabba@...gle.com,
willy@...radead.org, vannapurve@...gle.com, hch@...radead.org,
rientjes@...gle.com, jhubbard@...dia.com, qperret@...gle.com,
smostafa@...gle.com, fvdl@...gle.com, hughd@...gle.com
Subject: Re: [PATCH RFC 0/5] mm/gup: Introduce exclusive GUP pinning
On Tue, Jul 16, 2024 at 10:34:55AM -0700, Sean Christopherson wrote:
> On Tue, Jul 16, 2024, Jason Gunthorpe wrote:
> > On Tue, Jul 16, 2024 at 09:03:00AM -0700, Sean Christopherson wrote:
> >
> > > > + To support huge pages, guest_memfd will take ownership of the hugepages, and
> > > > provide interested parties (userspace, KVM, iommu) with pages to be used.
> > > > + guest_memfd will track usage of (sub)pages, for both private and shared
> > > > memory
> > > > + Pages will be broken into smaller (probably 4K) chunks at creation time to
> > > > simplify implementation (as opposed to splitting at runtime when private to
> > > > shared conversion is requested by the guest)
> > >
> > > FWIW, I doubt we'll ever release a version with mmap()+guest_memfd support that
> > > shatters pages at creation. I can see it being an intermediate step, e.g. to
> > > prove correctness and provide a bisection point, but shattering hugepages at
> > > creation would effectively make hugepage support useless.
> >
> > Why? If the private memory retains its contiguity separately but the
> > struct pages are removed from the vmemmap, what is the downside?
>
> Oooh, you're talking about shattering only the host userspace mappings. Now I
> understand why there was a bit of a disconnect, I was thinking you (hand-wavy
> everyone) were saying that KVM would immediately shatter its own mappings too.
Right, I'm imagining that guest_memfd keeps track of the physical ranges
in something else, like a maple tree, an xarray or, heck, perhaps a SW
radix page table. It does not use struct pages. Then it has, say, a
bitmap indicating which 4k granules are shared.
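Purely as a sketch of what I mean (hypothetical names, nothing that
exists today), the per-inode bookkeeping could be as simple as:

  #include <linux/types.h>
  #include <linux/xarray.h>

  /* Illustrative only: one physically contiguous chunk owned by guest_memfd */
  struct gmem_range {
          pgoff_t first;              /* first page offset in the file    */
          phys_addr_t base;           /* start of the contiguous region   */
          unsigned long nr_pages;     /* length in 4k granules            */
          unsigned long *shared;      /* one bit per granule: is shared?  */
  };

  struct gmem_inode_priv {
          struct xarray ranges;       /* page offset -> gmem_range (multi-index) */
  };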
When KVM or the private world needs the physical addresses, it reads
them out of that structure, and it always sees perfectly physically
contiguous memory regardless of any shared/private split.
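Roughly, again just a hypothetical helper to show the shape of the
lookup (assuming the ranges above are stored as multi-index entries via
xa_store_range()):

  #include <linux/mm.h>               /* PAGE_SHIFT */

  /* Illustrative only: translate a file page offset to a physical address */
  static phys_addr_t gmem_offset_to_phys(struct gmem_inode_priv *priv,
                                         pgoff_t index)
  {
          struct gmem_range *r = xa_load(&priv->ranges, index);

          if (!r)
                  return 0;           /* nothing allocated here yet */

          /* Contiguous within the range, shared granules or not */
          return r->base + ((index - r->first) << PAGE_SHIFT);
  }

In this sketch the shared bitmap does not affect the translation at all,
which is the "regardless of any shared/private" part above.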
It is not so much "broken at creation time", but more that guest_memfd
does not use struct pages at all for private mappings, and thus we can
set up the unused struct pages however we like, including removing them
from the vmemmap or preconfiguring them as order-0 granules.
There is definitely some detailed data structure work here to allow
guest_memfd to manage all of this efficiently and be effective for both
the 4k and 1G cases.
Jason