Message-ID: <20240620135540.GG2494510@nvidia.com>
Date: Thu, 20 Jun 2024 10:55:40 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Fuad Tabba <tabba@...gle.com>
Cc: Christoph Hellwig <hch@...radead.org>,
David Hildenbrand <david@...hat.com>,
John Hubbard <jhubbard@...dia.com>,
Elliot Berman <quic_eberman@...cinc.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Shuah Khan <shuah@...nel.org>, Matthew Wilcox <willy@...radead.org>,
maz@...nel.org, kvm@...r.kernel.org, linux-arm-msm@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org, pbonzini@...hat.com
Subject: Re: [PATCH RFC 0/5] mm/gup: Introduce exclusive GUP pinning

On Thu, Jun 20, 2024 at 09:32:11AM +0100, Fuad Tabba wrote:
> Hi,
>
> On Thu, Jun 20, 2024 at 5:11 AM Christoph Hellwig <hch@...radead.org> wrote:
> >
> > On Wed, Jun 19, 2024 at 08:51:35AM -0300, Jason Gunthorpe wrote:
> > > If you can't agree with the guest_memfd people on how to get there
> > > then maybe you need a guest_memfd2 for this slightly different special
> > > stuff instead of intruding on the core mm so much. (though that would
> > > be sad)
> >
> > Or we're just not going to support it at all. It's not like supporting
> > this weird usage model is a must-have for Linux to start with.
>
> Sorry, but could you please clarify to me what usage model you're
> referring to exactly, and why you think it's weird? It's just that we
> have covered a few things in this thread, and to me it's not clear if
> you're referring to protected VMs sharing memory, or being able to
> (conditionally) map a VM's memory that's backed by guest_memfd(), or
> if it's the Exclusive pin.
Personally I think mapping memory under guest_memfd is pretty weird.

I don't really understand why you end up with something different from
normal CC. Normal CC has memory that the VMM can access and memory it
cannot access. guest_memfd is supposed to hold the memory the VMM cannot
reach, right?

So how does normal CC handle memory switching between private and
shared, and why doesn't that work for pKVM? I think the normal CC path
effectively discards the memory content on these switches and is
slow. Are you trying to make the switch content-preserving and faster?

If yes, why? What is wrong with the normal CC model of slow and
non-preserving shared memory? Are you trying to speed up IO in these
VMs by dynamically sharing pages instead of using SWIOTLB?
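
[For readers outside the thread: the contrast above is between two semantics
for flipping a guest page between private and shared. The toy model below is
purely illustrative editorial shorthand, not KVM, pKVM, or guest_memfd code;
all names in it are made up.]

```python
# Toy model of the two private<->shared conversion semantics being
# contrasted above. Illustrative only; no relation to real kernel APIs.

class GuestPage:
    """A guest page with contents and a private/shared attribute."""
    def __init__(self, data: bytes):
        self.data = data
        self.shared = False

def convert_discarding(page: GuestPage, shared: bool) -> None:
    """'Normal CC' style: flipping the attribute discards the
    contents, so the guest sees a fresh zeroed page afterwards."""
    page.shared = shared
    page.data = b"\x00" * len(page.data)

def convert_preserving(page: GuestPage, shared: bool) -> None:
    """The content-preserving behaviour asked about above: the same
    contents remain visible across the flip."""
    page.shared = shared

p = GuestPage(b"secret!!")
convert_discarding(p, shared=True)
# p.data is now all zero bytes: contents were lost on conversion.

q = GuestPage(b"secret!!")
convert_preserving(q, shared=True)
# q.data is still b"secret!!": contents survived the conversion.
```

The question in the thread is why the second, content-preserving (and
presumably faster) behaviour is needed at all, given that the first is
what existing CC guests already cope with.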

Maybe this was all explained, but I reviewed your presentation and the
cover letter for the guest_memfd patches and I still don't see the why
in all of this.
Jason